All products manufactured by FLIR Systems are warranted against defective materials
and workmanship for a period of one (1) year from the delivery date of the original purchase, provided such products have been under normal storage, use and service, and in accordance with FLIR Systems' instructions.
Uncooled handheld infrared cameras manufactured by FLIR Systems are warranted
against defective materials and workmanship for a period of two (2) years from the delivery date of the original purchase, provided such products have been under normal storage, use and service, and in accordance with FLIR Systems' instructions, and provided
that the camera has been registered within 60 days of original purchase.
Detectors for uncooled handheld infrared cameras manufactured by FLIR Systems are
warranted against defective materials and workmanship for a period of ten (10) years
from the delivery date of the original purchase, provided such products have been under
normal storage, use and service, and in accordance with FLIR Systems' instructions, and
provided that the camera has been registered within 60 days of original purchase.
Products that are not manufactured by FLIR Systems but are included in systems delivered by FLIR Systems to the original purchaser carry the warranty, if any, of the particular supplier only. FLIR Systems has no responsibility whatsoever for such products.
The warranty extends only to the original purchaser and is not transferable. It is not applicable to any product which has been subjected to misuse, neglect, accident or abnormal
conditions of operation. Expendable parts are excluded from the warranty.
In the case of a defect in a product covered by this warranty, the product must not be used further, in order to prevent additional damage. The purchaser shall promptly report any defect to FLIR Systems or this warranty will not apply.
FLIR Systems will, at its option, repair or replace any such defective product free of
charge if, upon inspection, it proves to be defective in material or workmanship and provided that it is returned to FLIR Systems within the said one-year period.
FLIR Systems has no other obligation or liability for defects than those set forth above.
No other warranty is expressed or implied. FLIR Systems specifically disclaims the implied warranties of merchantability and fitness for a particular purpose.
FLIR Systems shall not be liable for any direct, indirect, special, incidental or consequential loss or damage, whether based on contract, tort or any other legal theory.
This warranty shall be governed by Swedish law.
Any dispute, controversy or claim arising out of or in connection with this warranty, shall
be finally settled by arbitration in accordance with the Rules of the Arbitration Institute of
the Stockholm Chamber of Commerce. The place of arbitration shall be Stockholm. The
language to be used in the arbitral proceedings shall be English.
1.2 Usage statistics
FLIR Systems reserves the right to gather anonymous usage statistics to help maintain
and improve the quality of our software and services.
1.3 Changes to registry
The registry entry HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
\LmCompatibilityLevel will be automatically changed to level 2 if the FLIR Camera Monitor service detects a FLIR camera connected to the computer with a USB cable. The
modification will only be executed if the camera device implements a remote network
service that supports network logons.
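Should you need to verify or document this setting, a minimal sketch for reading the current value with Python on a Windows host is shown below (the value may be absent when the system default applies; this is an illustrative check only, not a FLIR tool):

import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Lsa"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "LmCompatibilityLevel")
        print("LmCompatibilityLevel =", value)
except FileNotFoundError:
    print("LmCompatibilityLevel is not set (system default applies)")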
1.4 U.S. Government Regulations
This product may be subject to U.S. Export Regulations. Please send any inquiries to exportquestions@flir.com.
1.5 Copyright
The documentation must not, in whole or in part, be copied, photocopied, reproduced, translated or transmitted to any electronic medium or machine-readable form without prior consent, in writing, from FLIR Systems.
Names and marks appearing on the products herein are either registered trademarks or
trademarks of FLIR Systems and/or its subsidiaries. All other trademarks, trade names
or company names referenced herein are used for identification only and are the property of their respective owners.
1.6 Quality assurance
The Quality Management System under which these products are developed and manufactured has been certified in accordance with the ISO 9001 standard.
FLIR Systems is committed to a policy of continuous development; therefore we reserve
the right to make changes and improvements on any of the products without prior notice.
http://www.gnu.org/licenses/lgpl-2.1.en.html
(Retrieved May 27, 2015)
1.8.2 Fonts (Source Han Sans)
https://github.com/adobe-fonts/source-han-sans/blob/master/LICENSE.txt
(Retrieved May 27, 2015)
1.8.3 Fonts (DejaVu)
http://dejavu-fonts.org/wiki/License
(Retrieved May 27, 2015)
2 Safety information
For best results and user safety, the following warnings and precautions should be followed when handling and operating the camera.
• Do not open the camera body for any reason. Disassembly of the camera (including
removal of the cover) can cause permanent damage and will void the warranty.
• Great care should be exercised with your camera optics. Refer to section 12.1.2 Infrared lens, page 60 for lens cleaning.
• Operating the camera outside of the specified input voltage range or the specified operating temperature range can cause permanent damage.
• Do not image extremely high-intensity radiation sources, e.g., the sun, lasers, or arc
welders.
• The camera is a precision optical instrument and should not be exposed to excessive
shock and/or vibration.
• The camera contains static-sensitive electronics and should be handled appropriately.
• To maintain the cooling of the camera, do not put any items on the external cooling intake.
3 Notice to user
3.1 User-to-user forums
Exchange ideas, problems, and infrared solutions with fellow thermographers around the
world in our user-to-user forums. To go to the forums, visit:
http://forum.infraredtraining.com/
3.2 Calibration
We recommend that you send in the camera for calibration once a year. Contact your local sales office for instructions on where to send the camera.
3.3 Accuracy
For very accurate results, we recommend that you wait 5 minutes after you have started
the camera before measuring a temperature.
3.4 Disposal of electronic waste
As with most electronic products, this equipment must be disposed of in an environmentally friendly way, and in accordance with existing regulations for electronic waste.
Please contact your FLIR Systems representative for more details.
3.5 Training
To read about infrared training, visit:
• http://www.infraredtraining.com
• http://www.irtraining.com
• http://www.irtraining.eu
3.6 Documentation updates
Our manuals are updated several times per year, and we also issue product-critical notifications of changes on a regular basis.
To access the latest manuals, translations of manuals, and notifications, go to the Download tab at:
http://support.flir.com
It only takes a few minutes to register online. In the download area you will also find the
latest releases of manuals for our other products, as well as manuals for our historical
and obsolete products.
3.7 Important note about this manual
FLIR Systems issues generic manuals that cover several cameras within a model line.
This means that this manual may contain descriptions and explanations that do not apply
to your particular camera model.
3.8 Note about authoritative versions
The authoritative version of this publication is English. In the event of divergences due to
translation errors, the English text has precedence.
Any late changes are first implemented in English.
4 Customer help
4.1 General
For customer help, visit:
http://support.flir.com
4.2 Submitting a question
To submit a question to the customer help team, you must be a registered user. It only
takes a few minutes to register online. If you only want to search the knowledgebase for
existing questions and answers, you do not need to be a registered user.
When you want to submit a question, make sure that you have the following information
to hand:
• The camera model
• The camera serial number
• The communication protocol, or method, between the camera and your device (for example, SD card reader, HDMI, Ethernet, USB, or FireWire)
• Device type (PC/Mac/iPhone/iPad/Android device, etc.)
• Version of any programs from FLIR Systems
• Full name, publication number, and revision number of the manual
4.3 Downloads
On the customer help site you can also download the following, when applicable for the
product:
• Firmware updates for your infrared camera.
• Program updates for your PC/Mac software.
• Freeware and evaluation versions of PC/Mac software.
• User documentation for current, obsolete, and historical products.
• Mechanical drawings (in *.dxf and *.pdf format).
• CAD data models (in *.stp format).
• Application stories.
• Technical datasheets.
• Product catalogs.
5 Introduction
5.1 Camera system components
The FLIR X6570sc infrared camera and its accessories are delivered in a transport case
that typically contains the items below.
• FLIR X6570sc camera with removable LCD touchscreen.
• Portfolio that includes important information on the camera:
◦ Packing list.
◦ Factory acceptance report.
◦ Calibration curves (if applicable).
◦ Camera files on a CD-ROM.
◦ Optical cleaning tissue.
◦ A set of user instructions.
◦ Filter-holding tool.
◦ Micro SD card with an SD adapter.
• Camera power supply.
• Camera cables:
◦ Power supply.
◦ Gigabit Ethernet (GigE) with locks.
◦ 50 Ω coaxial cable for sync (yellow colored).
◦ 50 Ω coaxial cable for triggering (orange colored).
◦ 50 Ω coaxial cable for lock-in (green colored).
◦ 75 Ω coaxial cable for general purposes (blue colored).
◦ LCD extender cable (with right-angle USB connectors).
• LCD connector protective cap.
There may also be additional items that you have ordered such as software or CDs.
5.2 System overview
The FLIR X6570sc infrared camera system has been developed by FLIR to meet the
needs of the research community. The camera makes use of an advanced 640 × 512 readout integrated circuit (ROIC), mated to a mercury cadmium telluride (MCT) detector, to cover the 7.7–9.3 µm long-wave infrared band.
The FLIR X6570sc is a stand-alone imaging camera that interfaces to host PCs using
standard interfaces, including GigE and Camera Link Base.
5.2.1 View from the front—M80 mount
Figure 5.1 View from the front—M80 mount.
1. Wi-Fi antenna.
2. Global status LED.
3. Lens M80 interface.
5.2.2 View from the rear
Figure 5.2 View from the rear.
1. Removable touch screen LCD.
2. External cooling intake.
3. External cooling exhaust.
5.2.3 Back panel
Figure 5.3 Camera back panel description.
1. Power button.
2. Status LED.
3. Infrared remote sensor.
4. Sync In.
5. Sync Out.
6. Power In.
7. Lock-in In.
8. General-purpose IO.
9. Trigger In.
10. Auxiliary port.
11. GigE Vision.
12. Camera Link Base.
13. Camera Link Medium.
14. Digital video interface.
15. USB.
16. Micro SD card.
5.3 Key features
• Fast frame rate
The FLIR X6570sc series has an adjustable frame rate. Windowing allows a subset of
the total image to be selectively read out with a user-adjustable window size. The
sub-sample windows can be arbitrarily chosen and are easily defined.
• 14-bit image data
The FLIR X6570sc camera streams out 14-bit thermal images.
• Outstanding measurement accuracy
The high measurement accuracy of ±1°C or ±1% is complemented by high thermal sensitivity: the FLIR X6570sc camera detects temperature differences smaller than 25 mK (20 mK typical).
• CNUC calibration
CNUC is a proprietary calibration process that provides beautiful imagery and measurement stability. CNUC allows for flexible integration time adjustments without the
need to perform non-uniformity corrections. CNUC calibration also produces accurate
measurement stability regardless of exposure of the camera to ambient temperature
variations.
• Hypercal
Hypercal ensures the best measurement range with the highest sensitivity. Simply set
the desired lower and upper temperature limits, and the camera will automatically adjust to the appropriate integration (exposure) time.
• Auto-exposure
The camera automatically adjusts its temperature range to best fit the thermal scene.
• Presets
Up to eight presets and their associated parameters, e.g., integration time, frame rate,
window size, and window location, are available for instant selection with a single
command. These presets can be used in Dynamic Range Extension (DRX) mode (also called “superframing”), which allows the acquisition of thermal data from up to four
user-defined temperature ranges simultaneously, then merges those streams into a
single real-time data stream that spans all four temperature ranges, effectively extending dynamic range from 14 bit to 16 bit.
• Multiple triggering modes and synchronizing interfaces
The FLIR X6570sc camera provides different interfaces to support maximum flexibility
for synchronizing the camera to external events, as well as synchronizing external
events to the camera:
◦ Sync In (TTL).
◦ Sync Out.
◦ Trigger In.
• Multiple video outputs
The FLIR X6570sc camera features multiple independent and simultaneous video outputs:
◦ Digital 14-bit video—Camera Link Base.
◦ Digital 14-bit video—GigE.
◦ Digital video—DVI format 1080p30 digital output.
• The FLIR X6570sc camera has an advanced high-performance optical design. The
lenses feature a professional M80 mount.
• Motorized filter wheel
The FLIR X6570sc camera has a four-slot motorized filter wheel with automatic filter
recognition and measurement parameter adjustment. The removable filter holders
contain an integrated temperature probe for improved measurement accuracy.
• Removable touch screen LCD
The detachable touch screen LCD provides you with on-site thermal image feedback.
The LCD screen also provides camera information, adjustment controls, and ResearchIR Max acquisition control. The LCD touch screen can be removed from the
FLIR X6570sc camera when the camera needs to be installed in a hard to reach location. Simply position the camera and control it at a distance.
• Wi-Fi
The camera includes a Wi-Fi interface, which enables it to be controlled by a smart
phone (iPhone) or a tablet (iPad).
• Video color palettes
The FLIR X6570sc camera supports a selection of standard and user-defined color
palettes (or grayscale) for DVI video.
• Configuration management
Save your camera configuration to the SD card (e.g., when loaning your camera to
your colleague). To use a configuration saved on an SD card, simply insert the SD
card.
• Global-status LED
Located on the top of the camera, the global-status LED provides instant system status, including the ResearchIR Max status (a green light indicates fully acquired). The
back panel LEDs instantly inform you about the camera status.
6 Installing the camera
6.1 Mounting the camera
The camera can be operated either installed on a workbench or mounted on a tripod or custom mount. A standard photo mount (¼-20 UNC) and 3 × M5 mounting holes are provided on the camera base, as well as 3 × M5 mounting holes on the left side of the camera.
1. ¼-20 UNC mount.
2. Camera base M5 mounting holes.
3. Camera left side M5 mounting holes.
6.2 Powering the camera
6.2.1 Power supply
The camera is powered through the red power connector (6 in Figure 5.3 Camera back
panel description., page 11) on the back panel. When the 24 V DC power supply provided with the camera (PN X1159) is connected, the power button (1 in Figure 5.3 Camera back panel description., page 11) blinks slowly, indicating that the camera is
receiving power.
Refer to section 11 Technical data, page 57 for the power supply technical data.
6.2.2 Power button
The power button (1 in Figure 5.3 Camera back panel description., page 11) is located
behind the touch screen LCD. Open the touch screen LCD to its maximum extension to
access the button.
Note Keep the LCD screen open or detach it when the camera is being operated to
prevent the external cooling vent from being blocked.
A short press on the power button starts the camera.
To turn off the camera:
• A short press on the power button starts the camera shutdown procedure. The camera is switched off a few seconds later.
• A long press on the power button forces the camera to turn off immediately, bypassing
the shutdown procedure.
6.2.3 Camera boot-up and cooling down
When the camera is turned on, its Stirling cooler starts first. Stirling coolers produce
noise. A high volume of noise is normal for advanced cooled thermal cameras.
The camera requires up to 7 minutes to reach the detector temperature of 80 K. In parallel, the camera performs a built-in test of its components and initializes the internal software and interfaces.
The camera is ready to use when all the status LEDs on the back panel are green (2 in
Figure 5.3 Camera back panel description., page 11).
6.3 Adjusting the field of view
Once the camera is installed and operating, its field of view is adjusted to suit the thermal
scene being imaged. This adjustment is done by selecting a suitable lens for the desired
field of view, and then fine tuning the camera position to the scene.
Use of the LCD screen is helpful during this procedure.
6.3.1 LCD screen
The FLIR X6570sc includes a detachable touch screen LCD that provides instant thermal image feedback. The LCD screen also provides camera information, adjustment
controls, and ResearchIR Max acquisition control.
In the LCD screen examples below, the camera measurement configuration and temperature range have been adapted to the thermal scene. If the camera is not correctly set up
for the scene, the image can become saturated.
The temperature range of the camera can be automatically adjusted using the auto-exposure button on the touch screen. For more information, see section 8.2.2 Auto-exposure, page 35.
If the thermal scene does not match the measurement configuration (i.e., the spectral filter on the filter wheel), it is necessary to select the correct measurement configuration in
ResearchIR Max. For more information, see section 6.4.4 Measurement configuration,
page 21.
Figure 6.1 LCD touch screen.
1. Image statistics.
2. Add spot tool: center, cold, or hot.
3. Camera information.
4. Measurement configuration.
5. Start acquisition in ResearchIR Max.
6. Auto-exposure.
7. Change the color palette.
6.3.1.1 Detaching the touch screen LCD
The LCD screen can be detached from the camera and used remotely when the camera
is mounted in a hard to reach location.
Note
• The camera can still be operated without the LCD screen connected.
• The LCD screen can be detached and attached while the camera is in operation.
6.3.1.2 Procedure
Follow the procedure below to install and detach the LCD screen from the camera:
1. Remove the LCD screw using a flat screwdriver or a coin.
2. Gently lift up the screen to disconnect it from the camera, being careful of the USB
connector.
3. Place the provided protective cap on the camera, to avoid dust or water entering the
camera.
6.3.1.3 General
When detached, the LCD screen can be connected to the camera using the provided
right-angled USB extender cable. An additional USB cable can be added to extend the
length. The efficiency of operating the camera in this way is highly dependent on the
quality of the additional USB cable and the environment in which the camera is being
used.
The screen has been designed for ease of use on a workbench, as shown in the figure
below.
The screen automatically detects its orientation, and flips the interface accordingly. The
orientation can be locked in the ResearchIR Max camera user interface (see section
6.4.8 Advanced camera controls, page 24).
6.3.2 Lens
A large range of lenses is available for the FLIR X6570sc. The lenses feature a professional M80 mount.
Note FLIR is continuously extending its range of available optics. Contact your FLIR
sales representative for more information on newly available optics.
6.3.2.1 Installing an infrared lens
Note
• The detector is a very sensitive sensor. It must not be directed toward strong visible
light, e.g., sunlight.
• Do not touch the lens surface when you install the lens. If this happens, clean the lens
according to the instructions in section 12.1.2 Infrared lens, page 60.
• Do not touch the filter surface when you install the lens. If this happens, clean the filter
according to the instructions in section 12.1.2 Infrared lens, page 60.
6.3.2.1.1 Procedure—M80 mount
Follow the procedure below to install an infrared lens with an M80 mount:
1. If present, remove the installed lens or the protection in front of the detector/filter
wheel.
2. Carefully push the infrared lens into position.
3. Rotate the infrared lens clockwise (looking at the front of the lens) until it stops.
4. You must manually select the measurement configuration using the FLIR ResearchIR
Max software. For more information, see section 6.4.4 Measurement configuration,
page 21.
6.3.2.2 Removing an infrared lens
Note
• The detector is a very sensitive sensor. It must not be directed toward strong light, e.g., sunlight.
• Do not touch the lens surface when you install the lens. If this happens, clean the lens
according to the instructions in section 12.1.2 Infrared lens, page 60.
• Lenses can be heavy, so take care not to be surprised by their weight. Some lenses weigh several hundred grams.
• When you have removed the infrared lens, put the lens caps on the lens to protect it
from dust and fingerprints.
6.3.2.2.1 Procedure—M80 mount
Follow the procedure below to remove an infrared lens with an M80 mount:
1. Rotate the infrared lens counterclockwise (looking at the front of the lens).
2. Carefully pull out the infrared lens.
3. Install the protective cap or a new optic on the camera to avoid visible light striking
the detector.
6.3.2.3 Adjusting the camera focus
Note Do not touch the lens surface when you adjust the camera focus. If this happens,
clean the lens according to the instructions in section 12.1.2 Infrared lens, page 60.
Camera focus can be done manually by rotating the focus ring on the lens:
• For far focus, rotate the focus ring counterclockwise (looking at the front of the lens).
• For near focus, rotate the focus ring clockwise (looking at the front of the lens).
6.3.2.4 Using an extension ring
Note
• The detector is a very sensitive sensor. It must not be directed toward strong visible
light, e.g., sunlight.
• Do not touch the lens surface when you install the lens. If this happens, clean the lens
according to the instructions in section 12.1.2 Infrared lens, page 60.
• Using extension rings requires a good understanding of their radiometric effects and
the resulting measurement errors. The Infrared Training Center (ITC) offers courses
and training. For more information on any training you require, contact your FLIR sales
representative or ITC at www.infraredtraining.com.
Extension rings can be added between the camera and the infrared lens in order to
change the minimum focus distance and thus the field of view of the camera. It is possible to use more than one extension ring at the same time.
Refer to the specification sheet for your infrared lens for available extension ring sizes
and other data.
Depending on the extension ring, automatic lens identification may not function. This
means you must manually select the measurement configuration using the FLIR ResearchIR Max software.
6.4 Setting the camera parameters
6.4.1 Connection to the computer
The camera can be connected to a computer using either Camera Link or GigE.
Although it is possible to use both interfaces in parallel, only one of them should be used to send commands to the camera. The second interface should be used only to retrieve images.
6.4.1.1 Connection through the Camera Link interface
Camera Link is a standard data interface for high-end visible and infrared cameras. The
FLIR X6570sc uses a Camera Link Base interface in a single-tap, 16-bit configuration. In
terms of ports, the A and B ports are used, with bit A0 being the LSB of the data transferred, and bit B7 being the MSB. The header row uses the entire 16-bit value while the
pixel data has a 14-bit range, with the upper MSBs masked to 0.
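As an illustration of this bit layout only, the sketch below reassembles 14-bit pixel values from hypothetical per-pixel bytes captured on ports A (low byte) and B (high byte); in practice, unpacking is handled by the frame grabber SDK:

import numpy as np

# Hypothetical byte streams: one byte per pixel from port A (bits 0-7) and port B (bits 8-15).
port_a = np.array([0x34, 0xFF], dtype=np.uint16)
port_b = np.array([0x12, 0x3F], dtype=np.uint16)

words = (port_b << 8) | port_a   # 16-bit words as transferred (A0 = LSB, B7 = MSB)
pixels = words & 0x3FFF          # pixel data occupies the lower 14 bits
print(pixels)                    # [ 4660 16383]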
The camera is connected to the computer using one camera link cable (refer to section
11 Technical data, page 57 for cable reference and Camera Link information). Connect
the cable to connector 12 in Figure 5.3 Camera back panel description., page 11.
The Camera Link mode is selected using the ResearchIR Max camera control panel interface (refer to section 6.4.8 Advanced camera controls, page 24). It should always be
set to Base for the FLIR X6570sc.
Note
• Various connector notations can be found for Camera Link medium frame grabbers
(0&1, 1&2, A&B). Make sure to connect camera connector 1 to the first port of the
frame grabber.
• ResearchIR Max software supports a variety of frame grabbers. Contact your FLIR
sales representative for more information on compatibility.
6.4.1.2 Connection through the GigE interface
The FLIR X6570sc features a GigE connection. The GigE interface can be used for image acquisition and/or camera control. The GigE interface is GigE Vision compliant.
GigE is available when the camera is in Base mode only. Refer to section 6.4.8 Advanced camera controls, page 24 for mode selection.
Note
• Use only the high-quality Ethernet cable provided with the camera or a CAT 6 equivalent cable.
• The GigE driver installation procedure must be followed exactly. Contact your FLIR local support if required.
6.4.2 Connection to FLIR ResearchIR Max
Note Refer to section 6.4.1 Connection to the computer, page 19 to make sure that the
camera is correctly connected to the computer.
6.4.2.1 General
The FLIR X6570sc interfaces with the FLIR ResearchIR Max software. FLIR ResearchIR
Max is a powerful image acquisition and analysis tool. Refer to the ResearchIR Max user
manual for operating instructions. FLIR X6570sc specific camera control is described in
this document.
6.4.2.2 Procedure
Follow the procedure below to select and connect the camera:
1. Click the Select camera button.
2. Select the FLIR X6570sc camera.
The camera’s IP address is displayed when connected using GigE.
The Camera Link port is displayed when connected using Camera Link.
3. Click the Connect button to activate the camera connection.
Once connected, the camera control interface is populated with the camera parameters,
and the live image is displayed on the current tab.
6.4.3 Image size adjustment
6.4.3.1 General
The FLIR X6570sc can be set up to use only part of the detector. This allows the camera to be operated at higher frame rates. The selection is done through the upper part of the camera control panel (a small worked example of the window presets follows the list below).
Figure 6.2 Image size adjustment
1. Preview window. The window size can be selected by dragging the handles. The entire box can be dragged to set the location.
2. The X offset can be manually set in this field.
3. The Y offset can be manually set in this field.
4. The window width can be manually set in this field.
5. The window height can be manually set in this field.
6. Set the window size to full detector size (640 × 512).
7. Set the window size to half detector size (320 × 256 centered).
8. Set the window size to quarter detector size (160 × 128 centered).
9. Refresh the preview window with the last acquired image from the camera.
10. Apply the settings to the camera.
11. If needed, calibrate the image against a homogeneous reference target (also called
“1 point NUC”).
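As a small worked example of the presets above, the sketch below computes the offsets for a window centered on the 640 × 512 detector (the even-pixel alignment is an assumption; the actual offset granularity is defined by the camera):

DETECTOR_W, DETECTOR_H = 640, 512

def centered_window(width, height, align=2):
    # Return (x_offset, y_offset, width, height) for a sub-window centered on the detector.
    x = ((DETECTOR_W - width) // 2) // align * align
    y = ((DETECTOR_H - height) // 2) // align * align
    return x, y, width, height

print(centered_window(320, 256))   # half size    -> (160, 128, 320, 256)
print(centered_window(160, 128))   # quarter size -> (240, 192, 160, 128)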
6.4.4 Measurement configuration
6.4.4.1 General
A measurement configuration is a combination of optical setup (lens and spectral filter),
integration mode setting, and Camera Link setting.
The measurement configurations available in the camera are displayed in the FLIR ResearchIR Max interface. Each configuration is described with minimum and maximum
calibrated temperatures, the lens and filter type, the integration mode (ITR or IWR), and
the Camera Link setting (Base/Medium).
For lenses with automatic identification, the camera automatically
selects the measurement configuration corresponding to the lens.
You can also select the measurement configuration manually. Take care to select the
measurement configuration corresponding to the lens, filter, integration mode setting
(ITR or IWR), and the Camera Link setting (Base or Medium) in use. The configuration is
selected by clicking on it. It is then highlighted in light gray. Once selected, the camera is
automatically set to this configuration.
It is possible to deactivate the configuration filter by unchecking the check box shown
below.
When unchecked, all configurations available in the camera are listed. It is then possible
to select a configuration that does not match the current optical and detector setup. This
is useful for advanced users when, for instance, using an infrared lens for which no calibration files are available: in this example, the camera will provide temperature data even
if the calibration does not apply to the lens.
Note
• Only one measurement configuration is valid at a time.
• Make sure you select a configuration that matches the temperature of the scene to be
measured. If not, your measurements will be incorrect because they will be outside
the limits of the calibration.
6.4.5 Temperature range adjustment
6.4.5.1 General
The temperature range is defined by the minimum and maximum temperatures that can
be measured for a given integration time.
1. Integration time for the range. Double click on the integration time to manually enter a
value. The range is indicated in red and will be applied to the camera after clicking on
the Apply Configuration button.
2. Drag the range slider to adjust the integration time. The corresponding lower and
upper temperatures of the range are displayed. The range is indicated in red and will
be applied to the camera after clicking on the Apply Configuration button.
3. Activate the range by checking the box. If more than one range is activated, the cam-
era enters superframing mode, selecting each range in turn. Refer to section 8.6 Dynamic range extension—superframing, page 39 for more information on
superframing.
4. The FLIR X6570sc features an automatic exposure control that automatically selects
the best integration time for the current thermal scene. Refer to section 8.2.2 Auto-exposure, page 35 for more information on auto-exposure.
5. The temperature range wizard automates the selection of integration times and
superframing.
5.1. Select the temperature range to measure.
5.2. The wizard automatically calculates the best integration times to cover the desired temperature range.
Click on the Finish button to set up the camera accordingly.
6. Apply the temperature range configuration to the camera.
7. Read the actual camera configuration.
6.4.6 Frame frequency
6.4.6.1 General
The frame rate is the number of images taken by the camera per second. Achievable
frame rates are based on the camera settings, the camera overhead, and the integration
settings.
6.4.7 Synchronizing the camera to an external signal
6.4.7.1 General
Note Refer to section 8.7 Camera synchronization, page 39 for detailed information on
synchronization.
The camera can be synchronized to an external signal. This is useful in, for example,
brake disk testing. A signal from the testing machine will synchronize the camera to the
disk speed.
Synchronization parameters are set through the ResearchIR Max user interface:
1. Activate/deactivate external synchronization. Select the active edge and input
impedance.
2. Based on the camera configuration (e.g., the window size, integration time, or integration mode), the maximum allowable Sync In frequency is displayed.
3. The actual Sync In signal frequency is measured by the camera and displayed here.
If the Sync In frequency is higher than the maximum allowable frame rate, a warning
message is displayed. In this case, the input signal is under-sampled.
4. The jitter on the Sync In signal, which is typically one pixel clock, is displayed here.
5. The integration time length is displayed here. The integration time is defined in the
measurement range panel.
6. A delay between the Sync In signal and the start of integration time can be defined
here.
7. Several camera signals can be routed to the Sync Out connector. The polarity of
these signals is also defined here.
6.4.8 Advanced camera controls
6.4.8.1 General
This section describes the Advanced Camera Controls.
• Image Orientation: Select the orientation of the image at the detector level. This impacts digital radiometric outputs as well as video outputs.
• Integration Mode: Select between integrate then read (ITR) and integrate while read (IWR). (IWR mode is not available in all camera models.) Refer to section 8.5 Frame rate and integration modes, page 36 for more information about these modes. The integration mode impacts the available measurement ranges, depending on the calibration configuration of the camera.
• Streaming Mode: Select Base or Medium. For more information, see section 6.4.1.1 Connection through the Camera Link interface, page 19. The streaming mode should always be set to Base for the FLIR X6570sc.
• Auto measurement configuration selection: When this option is checked, the camera automatically searches for the measurement configuration corresponding to the exact optical path (filter + lens) and detector configuration. If no measurement configuration is available in the camera, selecting this option will have no effect. For lenses with the M80 mount (no automatic lens identification), this setting has no effect.
• Synchronize filter on measurement configuration: When this option is checked (default), the filter corresponding to the selected measurement configuration is automatically placed by the filter wheel in front of the detector. Deactivating this option should be reserved for advanced setups where the user requires a different spectral filter for a measurement configuration.
• Lock LCD orientation: Freeze the LCD screen automatic orientation.
• Remote control action: Select the action associated with the infrared remote controller. Refer to section 7.4 Infrared remote, page 33 for more information on the infrared remote.
6.4.9 Extended camera information
6.4.9.1 General
Extended camera information can be found in the Extended information section in the
ResearchIR Max camera tab.
• Temperature Probes: The camera is equipped with various temperature probes that are used for improving measurement accuracy or for camera diagnostics. Click the refresh button to update the temperature values.
• Miscellaneous: This section lists the firmware version information, the model, and the serial number of the camera.
• Image Statistics: The image statistics as measured by the camera are shown here. Click the refresh button to update the statistics values.
7 Operation
7.1 Filter wheel
The FLIR X6570sc includes a four-slot filter wheel. Each slot can hold a 1 in. (2.5 cm) diameter filter with a thickness of up to 2.5 mm. An identification system has been implemented so that the camera recognizes the inserted slot and automatically adjusts the
measurement configuration.
7.1.1 Removing an optical filter holder
Note
• This operation is undertaken close to the detector window. Take extreme care not to
touch or scratch the detector window. Contact FLIR service if you require assistance
with this operation.
• The detector is a very sensitive sensor. It must not be directed toward strong light, e.g., sunlight. It is better to remove filters with the camera turned on, as the detector,
when cooled, is less sensitive to visible light.
• A filter holder tool is provided with the camera.
• Do not touch the filter surface when you install the filter. If this happens, clean the filter
according to the manufacturer’s instructions.
Follow the procedure below to remove a filter holder from the camera filter wheel:
1. Select the measurement configuration range corresponding to the filter to be used.
This places the filter in front of the detector, allowing access to it.
2. If there is no measurement configuration range corresponding to the filter, switch off
the camera and manually rotate the wheel to place the filter to be removed in front of
the detector.
3. Gently insert the two pins of the filter holder tool into the corresponding holes.
4. Rotate the filter holder counterclockwise to release the holder from the wheel.
5. Gently remove the holder from the camera, and store it in its case.
7.1.2 Installing an optical filter holder
Note
• This operation is undertaken close to the detector window. Take extreme care not to
touch or scratch the detector window. Contact FLIR service if you require assistance
with this operation.
• The detector is a very sensitive sensor. It must not be directed toward strong light, e.g., sunlight. It is best to remove filters with the camera turned on, as the detector,
when cooled, is less sensitive to visible light.
• A filter holder tool is provided with the camera.
• Do not touch the filter surface when you install the filter. If this happens, clean the filter
according to the manufacturer’s instructions.
Follow the procedure below to install a filter holder in the camera filter wheel:
1. Select the measurement configuration range using the filter to be used. The corresponding filter slot is placed in front of the detector allowing access to it.
2. If there is no measurement configuration range corresponding to the filter, switch off
the camera and manually rotate the wheel to place the appropriate filter slot in front
of the detector.
3. Gently insert the two pins of the filter holder tool into the corresponding holes of the
holder to install.
4. Insert the filter holder, making sure that the two slots in the holder are in line with the
two springs on the filter wheel.
5. Rotate the filter holder clockwise until the springs are correctly maintaining the holder.
7.1.3 Filter holder identification
Each filter holder is identified by a combination of magnets glued onto the filter holder.
FLIR provides standard filter configurations with corresponding identification numbers
(IDs). At start-up, the camera scans the filter wheel and identifies the inserted holders. If
equipped with a lens with a bayonet mount, the camera also adjusts the measurement
configuration in accordance with the identified filter holders.
The IDs 40 to 58 are reserved for customer-defined holders.
7.1.4 Creating a custom filter holder
Note
• Filters are fragile. Handle them with great care.
• Do not touch the filter surface when you install the filter. If this happens, clean the filter
according to the manufacturer’s instructions.
• Wear gloves or finger cots to handle the filter.
7.1.4.1 General
You can configure a filter holder to use your own spectral filter. You need an empty holder
(P/N SC8_SC6_FILT_HOLD—contact your FLIR representative for more information on
blank filter holders).
7.1.4.2 Procedure
Follow the procedure below to assemble a filter within a filter holder:
1. Select a holder ID within the range 40 to 58. This will be used by the camera to identify your filter.
2. Convert this number to binary. For example, 40 is 101000 (a small conversion sketch follows this procedure).
3. The magnets provided with the empty holder are glued to the holder in accordance
with the binary code. For every “1” in the binary code, a magnet is glued in the appropriate hole in the holder (see the figure below), with the north pole of the magnet facing into the hole. For example, for binary code 101000, you need to place a magnet
at positions 8 and 32.
The use of Loctite Hysol 3430 A&B glue is recommended.
Note The Microsoft Windows calculator in programmer mode provides an easy
way to convert decimal numbers into binary code.
4. Place your filter in the holder. Take care to ensure correct filter orientation, to avoid errors in the radiometric measurement. Contact your filter provider for this information.
5. Gently insert the threaded filter ring, and tighten it using the filter tool. Take great care
not to damage the filter with the tool.
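As referenced in step 2, the magnet positions follow directly from the binary representation of the holder ID. The sketch below reproduces that conversion (an illustrative helper, not FLIR software); it matches the example above, where ID 40 (binary 101000) gives magnets at positions 8 and 32:

def magnet_positions(holder_id):
    # Hole positions are the powers of two (1, 2, 4, 8, 16, 32) whose bits are set in the ID.
    if not 40 <= holder_id <= 58:
        raise ValueError("customer-defined holder IDs must be in the range 40 to 58")
    return [1 << bit for bit in range(6) if holder_id & (1 << bit)]

print(magnet_positions(40))   # [8, 32]
print(magnet_positions(45))   # [1, 4, 8, 32]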
7.1.5 Installing two filters in the filter holder
You can install two filters in the filter holder.
Note
• The total thickness of the two filters must not exceed 2 mm.
• The order in which you install the filters will not affect their performance. However, to
avoid narcissus effects, it is recommended that the filters are installed with the least
reflective side downwards in the filter holder (i.e., toward the detector).
• Filters are fragile. Handle them with great care.
• Do not touch the filter surface when you install the filters. If this happens, clean the fil-
ter according to the manufacturer’s instructions.
• Wear gloves or finger cots to handle the filters.
• A set of filter holder equipment (filter holder, centering ring, filter spacer, threaded filter
ring, and magnets) and a filter holder tool are provided with the camera.
Follow the procedure below to install two filters in the filter holder.
1. Create a custom filter ID, as described in section 7.1.4 Creating a custom filter holder,
page 28.
2. Place the first filter in the holder.
3. Place the centering ring into the holder.
4. Clean the visible filter surface.
5. Place the filter spacer on top of the first filter.
6. Place the second filter into the holder. Make sure it seats correctly on top of the filter
spacer and is centered with the centering ring.
7. Gently insert the threaded filter ring and tighten it using the filter tool. Take great care
not to damage the filter with the tool.
7.1.6 Adding a custom filter parameter into the camera
7.1.6.1 General
Up to two filters can be mounted in a slot. A slot is defined in the camera’s slot.ini file,
and a filter’s definition is stored in a text file in the camera.
The Slot.ini file is a text file containing the holder identification and the corresponding filter
numbers.
[Holder XXX]
F1 = FYYYY
F2 = FZZZZ
Where XXX is the unique identifier of the slot, and YYYY and ZZZZ are filter numbers referring to the existing FYYYY.txt and FZZZZ.txt files, respectively.
For instance, for a holder defined by ID 42 in which the filter F3221 is mounted, the following should be added to the slot.ini file:
[Holder 42]
F1 = F3221
F2 = F9999
For a holder defined by ID 45 in which the filters F3221 and F1518 are mounted, the following should be added to the slot.ini file:
[Holder 45]
F1 = F3221
F2 = F1518
Note
• Refer to section 9.2 USB connection, page 46 to access the camera files through the
USB connection.
• Refer to section 7.1.7 Filter definition file description, page 31 for the description of
the filter definition.
7.1.6.2 Procedure
1. Connect your camera to your computer using the USB port.
2. Edit the Slot.ini file located in FlashFS/filters/ (a small scripted example follows this procedure).
3. Save and close the Slot.ini file.
4. If the filter definition files for the added holder are not present in FlashFS/filters/, they
have to be created.
5. Repeat steps 2 to 4 for all filters and holders to be added.
6. Reboot the camera (a short press on the power button) to apply the modification.
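As referenced in step 2, the Slot.ini entries can also be edited programmatically. A minimal sketch using Python's configparser is shown below; the holder ID and filter numbers are the examples from section 7.1.6.1, and the file path is illustrative (the file itself lives in FlashFS/filters/ on the camera):

import configparser

slots = configparser.ConfigParser()
slots.optionxform = str            # preserve the case of the F1/F2 keys
slots.read("Slot.ini")             # copy of the file taken from FlashFS/filters/

slots["Holder 45"] = {"F1": "F3221", "F2": "F1518"}

with open("Slot.ini", "w") as f:
    slots.write(f)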
7.1.7 Filter definition file description
7.1.7.1 General
Filter definition files contain identification and spectral information for the corresponding
filters. This information is used by the camera and ResearchIR Max to adjust measurement configurations.
Filter definition files must contain all the sections described below. The values in each
section must not be longer than the specified number of characters. It may be easier to
copy an existing filter file and modify it.
An example filter definition file:

#reference max 20 char
[reference]
F1201
#name max 20 char
[name]
NA_4094_4388_60%
#application max 20 char
[application]
Blue CO2 filter
#band max 10 char
[band]
MW
#material max 20 char
[material]
Silicon
#type max 20 char
[type]
Narrow
#peak in µm
[peak]
4.22
#cuton in nm
[cuton]
4094
#cutoff in nm
[cutoff]
4388
#transmission in %
[transmission]
60
#tolerance in %
[tolerance]
0.3
#[thickness] in mm
[thickness]
0.5
#spectral response max 160 char
[spectral response]
1:0;3,94:0,01;3,95:0,02;3,96:0,07;4,04:0,04;4,06:0,01;4,08:0;6:0

The sections are as follows:

• [reference] (max 20 characters): The filter reference. This reference is used in the Slot.ini file and must start with the capital letter “F.”
• [name] (max 20 characters): The user-friendly name that is displayed in the ResearchIR Max user interface. FLIR uses the naming convention XX_YYYY_ZZZZ_WW%, but it can be freely modified:
◦ XX: type of filter (NA, narrow; LP, low pass; HP, high pass; BP, band pass)
◦ YYYY: cut-in (in nm)
◦ ZZZZ: cut-off (in nm)
◦ WW: average transmission
• [application] (max 20 characters): Application in which the filter is used. This name is displayed on the camera’s LCD screen.
• [band] (max 10 characters): BB: broadband midwave (1.5–5 µm); MW: midwave (3–5 µm). If unknown, enter “N/A.”
• [material] (max 20 characters): Filter substrate. If unknown, enter “N/A.”
• [type] (max 20 characters): Type of filter (narrow, band pass, high pass, low pass). If unknown, enter “N/A.”
• [peak] (in µm): Peak transmission. If unknown, enter “0.”
• [cuton] (in nm): Filter cut-in. If unknown, enter “0.”
• [cutoff] (in nm): Filter cut-off. If unknown, enter “0.”
• [transmission] (in %): Average filter transmission. If unknown, enter “0.”
• [tolerance] (in %): Filter spectral tolerance. If unknown, enter “0.”
• [thickness] (in mm): Filter substrate thickness. If unknown, enter “0.”
• [spectral response] (max 160 characters): Spectral response curve definition. The wavelength and corresponding transmission (the maximum is 1) are separated by a colon. Pairs of values are separated by a semicolon. If unknown, enter “N/A.”
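A minimal sketch for reading the [spectral response] string back into numeric pairs is shown below (note that the example above uses commas as decimal separators, which the sketch converts; this is an illustrative parser, not part of ResearchIR Max):

def parse_spectral_response(text):
    # Split 'wavelength:transmission' pairs separated by semicolons into floats.
    pairs = []
    for item in text.strip().rstrip(";").split(";"):
        wavelength, transmission = item.split(":")
        pairs.append((float(wavelength.replace(",", ".")),
                      float(transmission.replace(",", "."))))
    return pairs

example = "1:0;3,94:0,01;3,95:0,02;3,96:0,07;4,04:0,04;4,06:0,01;4,08:0;6:0"
print(parse_spectral_response(example))
# [(1.0, 0.0), (3.94, 0.01), (3.95, 0.02), (3.96, 0.07), (4.04, 0.04), (4.06, 0.01), (4.08, 0.0), (6.0, 0.0)]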
7.2 Camera configuration file management
7.2.1 CNUC file management
7.2.1.1 General
Note
• CNUC files are related to the measurement configurations available for the camera.
Refer to section 6.4.4 Measurement configuration, page 21.
• Accessing camera files exposes the camera system files. Do not erase or modify files
other than the configuration files.
CNUC files are accessible by an FTP connection to the camera. Refer to section 9.2
USB connection, page 46 to connect to camera files.
7.2.1.2 Procedure
1. Connect your camera to your computer through the USB port.
2. You can add or delete camera calibration files directly in the directory FlashFS/nuc/.
3. Reboot the camera to apply the modification.
7.3 Camera Wi-Fi application
7.3.1 General
Note Refer to section 9.1 Wi-Fi connection, page 45 to set up a Wi-Fi connection to
the camera.
A web application is available when using Wi-Fi to connect to the camera. This application allows image recording to be started and stopped in ResearchIR Max.
7.3.2 Procedure
1. Connect your device (smartphone or computer) to your camera.
2. On a web browser, go to http://169.254.242.23.
3. Control the ResearchIR Max recording from the web page.
7.3.3 Camera web page description
1. Indicates camera status:
• Ready: The camera is running properly and providing infrared images.
• Not Ready: The camera is not providing infrared images. Check the camera status
LEDs for detailed information.
2. Indicates ResearchIR Max connection status:
• Connected: ResearchIR Max is connected to the camera and ready to acquire an
image sequence.
• Not Connected: No sequence acquisition is possible. Check ResearchIR Max status on the computer.
3. Indicates the current sequence recording status:
• Blank: Recording is not in progress in ResearchIR Max.
• Recording: ResearchIR Max is currently recording an image sequence.
4. Press the start/stop acquisition button to start or stop the image sequence acquisition
in ResearchIR Max.
7.4 Infrared remote
7.4.1 General
The FLIR X6570sc can be controlled with the provided infrared remote or any SLR camera remote control using the Nikon protocol.
The actions available are as follows:
• Start acquisition in ResearchIR Max.
• Trigger 1 point NUC calibration.
• Trigger auto-exposure.
7.4.2 Procedure
Follow the procedure below to select the infrared remote action:
1. Connect the camera to ResearchIR Max.
2. In the camera tab, under Advanced Camera Control, select the infrared remote
action.
8 Radiometric measurement
8.1 Non-uniformity correction (NUC)
8.1.1 General
NUC refers to the process by which the camera electronics correct for the differences in
the pixel-to-pixel response of each individual pixel in the detector array. The camera can
create (or allow the user to load) a NUC table that consists of unique gain and offset
coefficients and a bad pixel indicator for each pixel. The table is then applied in the digital
processing pipeline as shown in Figure 8.1. The result is corrected data, where each pixel responds consistently across the detector input range, creating a uniform image.
Figure 8.1 Digital process showing the application of NUC tables.
To create the NUC table, the camera images either one or two uniform temperature sources. The source is an external source provided by the user. The source should be uniform and large enough to overfill the camera’s field of view. By analyzing the pixel data
from these constant sources, the non-uniformity of the pixels can be determined and corrected. There are two types of processes that are used to create the NUC table: one
point and two point.
8.1.2 CNUC
8.1.2.1 General
CNUC is a proprietary calibration process. A camera calibrated with CNUC allows for
flexible integration time adjustments without the need to perform NUCs. Additionally, the
CNUC calibration produces accurate measurement stability regardless of the camera’s
exposure to ambient temperature variations.
A CNUC correction is valid for a specific optical configuration comprising a lens and
spectral filters combination. CNUC corrections are generated by FLIR service offices
where advanced calibration benches are available. Contact your FLIR representative for
CNUC correction on new spectral filters or infrared lenses.
The CNUC process generates a gain and offset map based on the camera’s internal parameters and environmental probes.
8.1.3 Two-point correction process
8.1.3.1 General
The two-point correction process builds a NUC table that contains individually computed
gain and offset coefficients for each pixel, as shown in Figure 8.2. Two uniform sources
are required for this correction: one source at the low end of the usable detector input
range, and a second source at the upper end.
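A schematic sketch of how per-pixel gain and offset coefficients can be derived from two such flat-field captures is shown below (the arrays and target levels are hypothetical; the camera's own NUC and CNUC processing is more elaborate):

import numpy as np

def two_point_nuc(raw_lo, raw_hi, target_lo, target_hi):
    # Per-pixel gain/offset so that raw_lo maps to target_lo and raw_hi maps to target_hi.
    gain = (target_hi - target_lo) / (raw_hi - raw_lo)
    offset = target_lo - gain * raw_lo
    return gain, offset

# Hypothetical flat-field captures against a cold and a hot uniform source.
raw_lo = np.random.normal(2000.0, 50.0, (512, 640))
raw_hi = np.random.normal(12000.0, 80.0, (512, 640))
gain, offset = two_point_nuc(raw_lo, raw_hi, 2000.0, 12000.0)

corrected = gain * raw_hi + offset          # applying the table to a frame
# A one-point (offset) update keeps gain fixed: offset = target_lo - gain * new_raw_lo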
Figure 8.2 Two-point correction.
8.1.4 One-point correction (offset correction)
8.1.4.1 General
The NUC correction is strongly dependent on the optical path in front of the detector,
and on the detector setup itself. Often, any change in the camera or detector settings will
require a new NUC. However, this change is mainly in the offset response of the image
while the gain component stays constant. An offset update simply computes a new offset
coefficient using the existing gain coefficient and corrects the image non-uniformity. An
offset update requires only one uniform source, usually set at a temperature on the lower
edge of the operational range.
One-point correction is done by clicking the calibrate button in the ResearchIR Max
camera tab (11 in Figure 6.2 Image size adjustment, page 21).
8.2 Temperature calibration
8.2.1 Hypercal
8.2.1.1 General
Hypercal is a proprietary temperature measurement process that complements CNUC.
With Hypercal, for any integration time selected, the camera produces accurate measurements within ±1°C or ±1% over the configured measurement range. Therefore, it
makes the selection of the optimal measurement range for a given thermal scene an
easy task.
Note ±1°C or ±1% accuracy is standard for the FLIR X6570sc, unless explicitly specified otherwise. Typically, calibrations on custom spectral filters or custom optical configurations have larger accuracy tolerances.
8.2.2 Auto-exposure
8.2.2.1 General
Because the dynamic range of a natural thermal scene can be larger than the range of
the camera, some images taken by the camera may be saturated. When an image is in
the bottom part of the dynamic range, the sensitivity is affected; therefore, the integration
time has to be increased. Conversely, when an image is in the higher part of the dynamic
range and saturated, the integration time has to be decreased.
When activated, the camera will search for the highest integration time for which the image dynamic range is contained in the upper part of the linearity domain of the detector.
Auto-exposure can be started from ResearchIR Max (see section 6.4.5 Temperature range adjustment, page 22) or from the LCD screen (see Figure 6.1 LCD touch screen.,
page 16).
Note
• The auto-exposure process looks for the best integration time for the actual thermal
scene. It may be the case that this preferred integration time is not achievable because it is limited by the camera’s frame rate. In this case, the auto-exposure process
is stopped, and the preferred integration time is not applied.
• The auto-exposure process is not designed to handle multiple integration times.
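A schematic sketch of the kind of search described above is shown below (the saturation level, margin, and capture(t) helper are illustrative assumptions, not FLIR's actual algorithm):

def auto_exposure(capture, integration_times, saturation_level=16383, margin=0.9):
    # Return the longest integration time whose frame stays below the saturation target.
    best = None
    for t in sorted(integration_times):      # shortest to longest
        frame = capture(t)                   # hypothetical: grab one frame at integration time t
        if frame.max() < margin * saturation_level:
            best = t                         # still unsaturated: keep the longer time
        else:
            break                            # saturated: stop searching
    return best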
8.3 Bad pixel replacement
8.3.1 General
Once a NUC has been carried out, bad pixels can be detected and replaced. This is done by replacing each bad pixel with the median value of its eight neighboring pixels (a sketch of this process follows the list below).
There are three kinds of bad pixels:
• Bad pixels relative to the gain of the non-uniformity correction. In this case the system will consider a pixel as bad if the gain coefficient from the NUC is lower or higher than the predefined percentage. For instance, if the threshold is 25%, the system will determine a pixel as bad if the gain is <0.75 or >1.25.
• Bad pixels relative to the offset of the NUC. In this case the system will consider a pixel as bad if the offset coefficient from the NUC table is lower or higher than the predefined threshold. For instance, if the threshold is 30% and if the range of digitization is 16 384 digital levels (DL), the system will determine a pixel as bad if the offset is <–4915 DL or >4915 DL.
• Bad pixels relative to their level of root-mean-square (RMS) noise. In this case the system will consider a pixel as bad if the RMS noise is lower or higher than the predefined threshold. For instance, if the threshold is 3.5 and the mean and standard deviation of the noise image are, respectively, 5.0 and 1.0, the system will determine a pixel as bad if the RMS noise is >8.5. With the absolute threshold, the system considers a pixel as bad if its value is higher than this threshold.
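A schematic sketch of these three criteria and the median replacement is shown below (the thresholds are the example values from the text; the 3 × 3 median filter is an approximation of the eight-neighbor median, and the input arrays are hypothetical):

import numpy as np
from scipy.ndimage import median_filter

def find_bad_pixels(gain, offset, rms_noise, gain_tol=0.25, offset_tol=4915, noise_k=3.5):
    bad = np.abs(gain - 1.0) > gain_tol                                   # gain outside 0.75-1.25
    bad |= np.abs(offset) > offset_tol                                    # offset outside +/-4915 DL
    bad |= rms_noise > rms_noise.mean() + noise_k * rms_noise.std()       # noisy pixels
    return bad

def replace_bad_pixels(image, bad):
    filled = median_filter(image, size=3)    # neighborhood median
    out = image.copy()
    out[bad] = filled[bad]
    return out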
8.4 Camera file management
8.4.1 Procedure
1. Connect your camera to a computer using the USB port.
2. You can add or delete camera calibration files directly in the directory FlashFS/nuc/.
3. Reboot the camera to apply the modification.
Note
If you are using Microsoft Windows 7:
• The camera must be connected to the computer using USB.
• The USB drivers must be correctly installed.
8.5 Frame rate and integration modes
8.5.1 General
The frame rate is the number of images taken by the camera per second. The integration
time is the “exposure time”—the period of time for which the camera views the scene.
Achievable frame rates are based on the camera settings, the camera overhead, and the
integration settings. A brief review of the processes that occur during a frame is needed
to understand how to determine maximum achievable frame rates.
There are two basic integration modes: integrate then read (ITR) and integrate while
read (IWR). ITR is the most basic behavior of the camera and shows the process most
clearly.
Note
• An NUC update is recommended any time an adjustment is made to either the frame
rate or the integration time, regardless of the integration mode.
• The IWR mode is not available in all camera models.
8.5.2 The ITR process
As seen in Figure 8.3, the frame generation process begins with a frame synchronization
(Frame Sync). The camera then integrates the set amount of time, goes through a fixed
dead time, transmits data, goes through a second fixed dead time, and then is ready to
start the process over again. The figure shows that the camera first completes the integration process and then reads the data out, hence the term “integrate then read.”
Figure 8.3 The ITR frame generation process.
All timings for the frame generation process are based on a 20 MHz clock, yielding a resolution of 50 ns. The minimum integration time for the FLIR X6570sc is 10 µs.
8.5.2.1 Maximum achievable frame rate in ITR—base mode
Table 8.1 Maximum frame rate (in Hz) versus image size (detector mode: ITR/base/integration time = 10 µs)
Note The IWR mode is not available in all camera models.
Follow the procedure below to select the camera integration mode (ITR or IWR):
1. Connect the camera to ResearchIR Max.
2. In the camera tab, under Advanced Camera Control, select the integration mode.
8.6 Dynamic range extension—superframing
The main purpose of superframing is to capture a large dynamic range event with various
integration times. Consider a rocket launch as an example. During the launch, a short integration time would be needed to monitor the plume of the rocket. However, such a
short integration time would not yield adequate images across the rocket body. If the
integration time was increased to yield adequate images across the rocket body and its
plume, the plume would saturate the detector. Superframing cycles through up to eight
different integration periods. Below is a timing graph explaining the link between the recorded frame and the integration time in superframing mode.
Refer to section 6.4.5 Temperature range adjustment, page 22 to set up superframing.
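Off-line, the frames of one superframe cycle can be combined into a single extended-dynamic-range image by keeping, for each pixel, the longest integration time that is not saturated. The sketch below is only an illustration of that idea under assumed names and a made-up saturation level; it is not the processing performed by ResearchIR Max.

```python
import numpy as np

def merge_superframes(frames, integration_times_us, saturation_dl=15000):
    """Combine one superframe cycle (up to eight frames at different
    integration times) into one image expressed in counts per microsecond.

    Per pixel, the longest unsaturated integration time is used, which
    favours signal-to-noise while avoiding saturated readings.
    """
    order = np.argsort(integration_times_us)[::-1]      # longest time first
    merged = np.zeros(np.shape(frames[0]), dtype=float)
    filled = np.zeros(np.shape(frames[0]), dtype=bool)
    for i in order:
        frame = np.asarray(frames[i], dtype=float)
        usable = (frame < saturation_dl) & ~filled
        merged[usable] = frame[usable] / integration_times_us[i]
        filled |= usable
    return merged
```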
8.7 Camera synchronization
The FLIR X6570sc can be synchronized to an external signal. The synchronization applies to the timing of an individual frame. The camera features a Sync In connector (4 in Figure 5.3 Camera back panel description, page 11) and a Sync Out connector (5 in Figure 5.3 Camera back panel description, page 11).
The FLIR X6570sc makes use of frame synchronizations to control the generation of image data. The generation of a frame consists of two phases: integration and data readout. Depending on the timing between these two events, you have two basic integration
modes: ITR and IWR. In ITR mode, integration and data readout occur sequentially. The
complete frame time is the combined total of the integration time plus the readout time.
In IWR mode, the integration phase of the current frame occurs during the readout phase
of the previous frame. In other words, the ITR and IWR refer to whether or not the camera will overlap the data readout and integration periods. In ITR mode, the data is not
overlapped, which means lower frame rates, but this process provides a less noisy image. IWR mode can achieve much faster frame rates, but with a slight increase in noise.
On frame synchronization, the camera immediately integrates, followed by data read out.
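The difference between the two modes can be expressed as a rough frame-time budget: in ITR the integration and readout times add up, while in IWR only the longer of the two matters. The helper below is a back-of-the-envelope sketch with made-up readout and overhead figures, not a substitute for the frame-rate tables of a specific camera model.

```python
def max_frame_rate_hz(integration_time_us, readout_time_us,
                      overhead_us=0.0, mode="ITR"):
    """Rough upper bound on the achievable frame rate for the two modes."""
    if mode == "ITR":
        # Integrate-then-read: the two phases happen one after the other.
        frame_time_us = integration_time_us + readout_time_us + overhead_us
    elif mode == "IWR":
        # Integrate-while-read: integration overlaps the previous readout.
        frame_time_us = max(integration_time_us, readout_time_us) + overhead_us
    else:
        raise ValueError("mode must be 'ITR' or 'IWR'")
    return 1e6 / frame_time_us

# Example with illustrative numbers: 400 us integration, 2.5 ms readout.
print(max_frame_rate_hz(400, 2500, mode="ITR"))   # about 345 Hz
print(max_frame_rate_hz(400, 2500, mode="IWR"))   # about 400 Hz
```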
Note
• When using an external frame synchronization and preset sequencing, or superfram-
ing, the external frame synchronization should be set to comply with the ITR frame
rate limits. If the external synchronization rate is too fast, the camera will ignore synchronizations that occur before the camera is ready.
• If the frame rate is too low, the image quality may deteriorate. Contact your FLIR rep-
resentative if you have questions about low frame rates for your specific camera
model.
• Synchronization is different from triggering. The latter is described in section 8.8 Trig-
ger In, page 42.
• The IWR mode is not available in all camera models.
Figure 8.4 Frame synchronization—ITR mode.
Figure 8.5 Frame synchronization—IWR mode.
8.7.1 Sync In
8.7.1.1 General
The Sync In signal is supplied to the camera by connector 4 in Figure 5.3 Camera back
panel description., page 11. The minimum pulse width is 300 ns.
The Sync In setup is described in section 6.4.7 Synchronizing the camera to an external
signal, page 23.
8.7.1.2 Characteristics
Name | Value
Amplitude | Rising-edge TTL, 0/+5 V
High-state minimum voltage | >3.5 V
Low-state maximum voltage | <0.5 V
Polarity | User selectable
Maximum frequency¹ | Maximum frame rate of the camera for a given detector configuration²
Minimum pulse width | 300 ns
Impedance | User selectable. 50 Ω/10 MΩ
Protection | Voltage peaks (500 V/<1 ns); Overvoltage (15 V); Reversed polarity
Connector type | Coaxial BNC jack

1. Some cameras also have a minimum frequency (unspecified). Contact your FLIR representative for more information.
2. Example: If the detector is working at 100 Hz in full frame, then the maximum frequency will be 100 Hz.
8.7.1.3 Chronogram
Name | Value | Notes
Jitter | 12.5 ns | –
Fixed delay | 690 ns | Propagation through the back panel card + propagation from the FPGA to the detector
Manual delay | 1 pixel clock (80 MHz) | Set by the user
Integration Time | – | Set by the user
8.7.1.4 LED description
LED status | Description
Off | No signal detected. Check the connection and signal levels.
Green | Signal detected and the signal voltage is correct but the signal is continuous.
Orange | Signal detected but the signal voltage is incorrect.
Blinking green | Signal detected. LED is blinking at the signal frequency. Signal voltage is correct.
Blinking orange | Signal detected. LED is blinking at the signal frequency. Signal voltage is incorrect.
8.7.2 Sync Out
8.7.2.1 General
The Sync Out signal is synchronous with the Sync In or the frame rate (if Sync In is not
selected). It can be used to synchronize other events with the camera. It is a TTL signal.
The Sync Out setup is described in section 6.4.7 Synchronizing the camera to an external signal, page 23.
8.7.2.2 Characteristics
Name | Value
Amplitude | TTL signal 0/+5 V
Max frequency | Maximum frame rate of the camera for a given detector configuration¹
Impedance | High impedance
Minimum pulse width | 300 ns
Protection | Voltage peaks (500 V/<1 ns); Overvoltage (15 V); Reversed polarity
Connector | Coaxial BNC jack

1. Example: If the detector is working at 100 Hz in full frame, then the maximum frequency will be 100 Hz.
8.7.2.3 LED description
LED status | Description
Off | No signal.
Blinking green | Signal ready to use. LED is blinking at the signal frequency.
Green | Signal not usable. Signal voltage is 5 V continuous.
8.8 Trigger In
8.8.1 General
Trigger In is used to tag images in the camera so that they are recorded by the software.
The status of the Trigger In signal at the start of integration is added to the frame header
sent to the recording software.
ΔT is a jitter of one frame period maximum. The Trigger In signal must be at least one
frame period long. All frames are sent to the computer. ResearchIR Max will start or stop
acquisition based on the Trigger In signal. This is configured in the start and stop conditions on the ResearchIR Max recording tab (left panel).
When to use this configuration
• To capture a fugitive event. The camera will acquire images, but the software only re-
cords the frame of interest.
• When a precise start for the recording time is required.
• When only a few frames need to be recorded.
When NOT to use this configuration
• When each acquisition needs to be triggered. In that situation, it is preferable to use
the Sync In input.
8.8.2 Characteristics
Name | Value
Amplitude | Rising-edge TTL, 0/+5 V
High-state minimum voltage | >3.5 V
Low-state maximum voltage | <0.5 V
Minimum pulse width | One frame period
Impedance | User selectable. 50 Ω/10 MΩ
Protection | Voltage peaks (500 V/<1 ns); Overvoltage (15 V); Reversed polarity
Connector type | Coaxial BNC jack
8.8.3 LED description
LED status | Description
Off | No signal detected. Check the connection and signal levels.
Green | Signal detected and the signal voltage is correct but the signal is continuous.
Orange | Signal detected. Signal voltage is incorrect.
Blinking green | Signal detected. LED is blinking at the signal frequency. Signal voltage is correct.
Blinking orange | Signal detected. LED is blinking at the signal frequency. Signal voltage is incorrect.
8.9 Lock-in
8.9.1 General
The lock-in technique is commonly used in thermography to improve the sensitivity of the
camera and to extract from the thermal signal the thermal effects corresponding to an external excitation in the object under evaluation.
The FLIR X6570sc features a lock-in signal input BNC connector on the back panel of
the camera (7 in Figure 5.3 Camera back panel description., page 11).
The value of the signal is digitized during the integration of the infrared image and embedded within it. It is then recorded by ResearchIR Max and stored in the sequence file
image headers. Files recorded with ResearchIR Max can subsequently be exploited with
FLIR Thesa software. Contact your FLIR representative for further information.
Note When conducting lock-in experiments it is highly recommended to place a large
non-polarized capacitor (e.g., 1 µF 100 V or 1 µF 63 V) in series with the input.
8.9.2 Characteristics
Name | Value
Amplitude | 150 mV < V_lock-in < 10 V
Frequency | 10 mHz < F_lock-in < 6 kHz
Maximum signal offset sweep rate | Half-amplitude per period
Impedance | High Z
Protection | Peak voltage 500 V; Clamping voltage 150 V; Rated voltage 24 V; Electrostatic discharge (ESD) contact 8 kV; ESD air 15 kV
Connector type | Coaxial BNC jack
8.10 IRIG-B
The FLIR X6570sc features an IRIG-B input BNC connector (colored blue) on the back
panel of the camera (8 in Figure 5.3 Camera back panel description., page 11).
The value of the signal is digitized during the integration of the infrared image and embedded within it. It is then recorded by ResearchIR Max and stored in the sequence file
image headers.
The supported IRIG-B formats are IRIG-B12x. The signal should be 3:1, 3 Vpp maximum
at 50 Ω or 6 Vpp for high impedance input.
9 Interfaces
9.1 Wi-Fi connection
9.1.1 General
It is possible to connect to the FLIR X6570sc using the camera’s integrated Wi-Fi and a
peer-to-peer (ad hoc) WLAN network. This connection allows control of image acquisition in ResearchIR Max from the camera (same functions as on the LCD screen).
9.1.2 Procedure
Follow the procedure below to set up the peer-to-peer WLAN network:
1. Connect the camera.
2. Enter the password: 1234567890.
3. Configure the advanced parameters.
4. Connect to http://192.168.64.1/
9.2 USB connection
9.2.1 General
The camera features a USB connector that is used to access the camera’s internal file
system. Once connected, the connection allows:
• Access to the camera memory, to upload configuration and CNUC files.
• Access to the camera registry with the Res.NET utility (provided on request, see sec-
tion 4 Customer help, page 7).
The USB connection requires a FLIR USB driver on the camera and also on the computer. Depending on the camera, there are two scenarios:
1. The camera has a newer FLIR USB driver
When the FLIR ResearchIR Max software is installed on the computer, a valid FLIR
USB driver is automatically installed on the computer. When this FLIR USB driver is
installed, FLIR Camera Network Device is listed under Network adapters in Windows
Device Manager. No manual installation of a USB driver is required, but you need to
configure the network interface (see section 9.2.3 Configuration of the network interface, page 51).
2. The camera has a legacy FLIR USB driver
Installation of the driver FLIR X8400sc - X6500sc – USB.inf is required on the computer. For more information, see section 9.2.2 USB driver installation, page 47. After
installing the driver, you need to configure the network interface (see section 9.2.3
Configuration of the network interface, page 51).
Note
• The connection type to the camera is RNDIS over a USB connection.
• The following operating systems are supported:
◦ Microsoft Windows 7 32 and 64 bit
◦ Microsoft Windows Vista 32 and 64 bit
◦ Microsoft Windows XP SP2
9.2.2 USB driver installation
This section applies to cameras with the legacy FLIR USB driver. Installation of the driver
FLIR X8400sc - X6500sc – USB.inf is required on the computer. Contact your FLIR service centre or visit http://support.flir.com to download the driver.
9.2.2.1 First time installation
9.2.2.1.1 General
When the camera is connected to the computer by a USB cable for the first time, Windows detects the camera and prompts you to select the driver.
9.2.2.1.2 Procedure
Follow the procedure below to install the USB driver:
1. Connect the camera to the computer using the USB cable.
2. At the Windows prompt, select the FLIR X8400sc - X6500sc – USB.inf file.
3. Allow installation of the driver, despite it not being Microsoft trusted software.
4. Configure the network interface by following the procedure in section 9.2.3 Configuration of the network interface, page 51.
9.2.2.2 Replacing an existing driver
9.2.2.2.1 General
When the USB driver has already been installed, follow the procedure below to update
the driver.
9.2.2.2.2 Procedure
1. Disable the Ethernet connection between the camera and the computer.
2. Connect the camera to the computer with the USB cable.
3. Open Control Panel > Device Manager.
4. Under Network adapters, find the device named FLIR Platinum USB.
5. Right-click and select Update Driver Software.
6. Select Browse my computer for driver software.
7. Select Let me pick from a list of device drivers on my computer.
8. Click the Have Disk… button.
9. Browse to and select the file FLIR X8400sc - X6500sc – USB.inf.
10. Select Driver X8400sc – X6500sc – USB, and click the Next button.
11. Click Install this driver software anyway. The driver is now installing. If the installation
does not progress correctly, try unplugging the USB cable from the computer.
12. Restart the computer.
13. Device Manager now shows the new device FLIR X8400sc – X6500sc.
14. Configure the network interface by following the procedure in section 9.2.3 Configuration of the network interface, page 51.
9.2.3 Configuration of the network interface
9.2.3.1 Procedure
1. Go to Network Connections: open the Control Panel, click Network and Internet, click
View network status and tasks, and click Change adapter settings.
2. Right-click on the FLIR X8400sc – X6500sc connection and click Properties, then select Internet Protocol V4 and click the Properties button.
3. Set the IP address to 169.254.242.10 and the subnet mask to 255.255.255.0.
Click the OK button.
4. The camera can now be accessed:
• Use Windows Explorer to access camera files such as CNUC, lens, and filter ID
descriptors.
• Use ResNet to access the camera registry. ResNet is an internal tool. Contact
your FLIR service department for more information.
9.2.4 Accessing the camera files with Windows Explorer
9.2.4.1 General
Note
• The camera must be connected to the computer using the USB cable.
• The USB drivers must be correctly installed.
When the camera is connected to the computer with the USB cable (the Ethernet connection to the camera must be disabled), type the address of the camera in Windows Explorer. The address can be in the form of:
• The camera’s IP address: 169.254.242.23.
• The camera’s SMB name: Platinum.
9.2.4.2 Procedure
The Windows 7 default configuration needs to be modified to allow correct display of the
camera files in Windows Explorer:
1. Open the Local Security Policy control panel. It can be easily found by typing “Local”
in the Windows Start Menu search field.
2. Set the LAN Manager authentication level to “Send LM & NTLM responses.”
11 Technical data
11.1 Note about technical data
FLIR Systems reserves the right to change specifications at any time without prior notice.
Please check http://support.flir.com for latest changes.
11.2 Note about authoritative versions
The authoritative version of this publication is English. In the event of divergences due to
translation errors, the English text has precedence.
Any late changes are first implemented in English.
Waveform generator | Sine/triangle/square TTL 0–5 V. Frequency: 0.001 Hz to 250 kHz
±1°C (1.8°F) or ±1%
M80
12.5 ns
GigE
GigE Vision | Yes
GenICam | On specific configuration only
Camera Link
Connector type | 1 × Mini MDR26
Wi-Fi type | 802.11g
Lenses
Available optics:
• L0306—12 mm f/2
• L0324—25 mm f/2
• L0302—50 mm f/2
• L0201—100 mm f/2
• L0113TV—200 mm f/2
• L0215—G1 f/2
12 Maintenance and service
12.1 Cleaning the camera
12.1.1 Camera housing, cables, and other items
12.1.1.1 Liquids
Use one of these liquids:
• Warm water
• A weak detergent solution
12.1.1.2 Equipment
A soft cloth
12.1.1.3 Procedure
Follow this procedure:
1. Soak the cloth in the liquid.
2. Twist the cloth to remove excess liquid.
3. Clean the part with the cloth.
CAUTION
Do not apply solvents or similar liquids to the camera, the cables, or other items. This can cause
damage.
12.1.2 Infrared lens
12.1.2.1 Liquids
Use one of these liquids:
• A commercial lens cleaning liquid with more than 30% isopropyl alcohol.
• 96% ethyl alcohol (C₂H₅OH).
12.1.2.2 Equipment
Cotton wool
CAUTION
If you use a lens cleaning cloth it must be dry. Do not use a lens cleaning cloth with the liquids that are
given in section 12.1.2.1 above. These liquids can cause material on the lens cleaning cloth to become
loose. This material can have an unwanted effect on the surface of the lens.
12.1.2.3 Procedure
Follow this procedure:
1. Soak the cotton wool in the liquid.
2. Twist the cotton wool to remove excess liquid.
3. Clean the lens one time only and discard the cotton wool.
WARNING
Make sure that you read all applicable MSDS (Material Safety Data Sheets) and warning labels on containers before you use a liquid: the liquids can be dangerous.
CAUTION
• Be careful when you clean the infrared lens. The lens has a delicate anti-reflective coating.
• Do not clean the infrared lens too vigorously. This can damage the anti-reflective coating.
12.2 Cooler maintenance
12.2.1 General
The microcooler is designed to provide maintenance-free operation for many thousands
of hours. The microcooler contains pressurized helium gas.
After several thousand hours of operation the gas pressure decreases, and cooler service is required to restore cooler performance. The cooler also contains micro ball bearings, which may exhibit wear by becoming louder.
12.2.2 Signs to watch for
The FLIR Systems microcooler is equipped with a closed-loop speed regulator, which
adjusts the cooler motor speed to regulate the detector temperature.
Typically, the cooler runs at maximum speed for 7–10 minutes (depending on model),
and then slows to about 40% of maximum speed. As the gas pressure degrades, the motor continues at maximum speed for longer and longer periods to attain operating
temperature.
Eventually, as the helium pressure decreases, the motor will lose the ability to achieve
and/or maintain operating temperature. When this occurs, the camera must be returned
to FLIR Systems Customer Service Department for service.
13 Quality
13.1 Quality assurance
The quality management system under which this product is developed and manufactured has been certified in accordance with the ISO 9001 standard. FLIR Systems is
committed to a policy of continuous development; therefore, we reserve the right to make
changes and improvements to the product described in this manual without prior notice.
13.2 For the US market
Important instructions and notices for the user
Modification of this device without the express authorization of FLIR Systems Advanced
Thermal Solutions may void the user’s authority under FCC rules to operate this device.
Note This equipment generates, uses, and can radiate radio-frequency energy, and if
not installed and used in accordance with the instructions it may cause harmful interference to radio communications. It has been tested and found to comply with the limits for
a Class A computing device pursuant to Subpart J of Part 15 of FCC Rules, which are
designed to provide reasonable protection against such interference when operated in a
commercial environment. Operation of this equipment in a residential area is likely to
cause interference, in which case the user at their own expense will be required to take
whatever measures may be required to correct the interference.
13.3 For the Canadian market
Industry Canada Notice
This Class A digital apparatus complies with Canadian standard ICES-003.
Note d’industrie Canada
Cet appareil numérique de Classe A est conforme à la norme NMB-003 du Canada.
13.4 For the whole world
Proper disposal of electrical and electronic equipment (EEE)
The European Union (EU) has enacted Waste Electrical and Electronic Equipment Directive 2002/96/EC (WEEE), which aims to prevent EEE waste from arising; to encourage
reuse, recycling, and recovery of EEE waste; and to promote environmental
responsibility.
In accordance with these regulations, all EEE products labeled with the “crossed out
wheeled bin” either on the product itself or in the product literature must not be disposed
of in regular rubbish bins, mixed with regular household or other commercial waste, or by
other regular municipal waste collection means. Instead, and in order to prevent possible
harm to the environment or human health, all EEE products (including any cables that
came with the product) should be responsibly discarded or recycled.
To identify a responsible disposal method where you live, contact your local waste collection or recycling service, your original place of purchase or product supplier, or the responsible government authority in your area. Business users should contact their
supplier or refer to their purchase contract.
14 About FLIR Systems
FLIR Systems was established in 1978 to pioneer the development of high-performance
infrared imaging systems, and is the world leader in the design, manufacture, and marketing of thermal imaging systems for a wide variety of commercial, industrial, and government applications. Today, FLIR Systems embraces five major companies with
outstanding achievements in infrared technology since 1958—the Swedish AGEMA Infrared Systems (formerly AGA Infrared Systems), the three United States companies Indigo Systems, FSI, and Inframetrics, and the French company Cedip.
Since 2007, FLIR Systems has acquired several companies with world-leading expertise
in sensor technologies:
• Extech Instruments (2007)
• Ifara Tecnologías (2008)
• Salvador Imaging (2009)
• OmniTech Partners (2009)
• Directed Perception (2009)
• Raymarine (2010)
• ICx Technologies (2010)
• TackTick Marine Digital Instruments (2011)
• Aerius Photonics (2011)
• Lorex Technology (2012)
• Traficon (2012)
• MARSS (2013)
• DigitalOptics micro-optics business (2013)
• DVTEL (2015)
• Point Grey Research (2016)
• Prox Dynamics (2016)
Figure 14.1 Patent documents from the early 1960s
FLIR Systems has three manufacturing plants in the United States (Portland, OR; Boston, MA; Santa Barbara, CA) and one in Sweden (Stockholm). Since 2007 there has also been a
manufacturing plant in Tallinn, Estonia. Direct sales offices in Belgium, Brazil, China,
France, Germany, Great Britain, Hong Kong, Italy, Japan, Korea, Sweden, and the USA
—together with a worldwide network of agents and distributors—support our international customer base.
FLIR Systems is at the forefront of innovation in the infrared camera industry. We anticipate market demand by constantly improving our existing cameras and developing new
ones. The company has set milestones in product design and development such as the
introduction of the first battery-operated portable camera for industrial inspections, and
the first uncooled infrared camera, to mention just two innovations.
Figure 14.2 1969: Thermovision Model 661. The
camera weighed approximately 25 kg (55 lb.), the
oscilloscope 20 kg (44 lb.), and the tripod 15 kg
(33 lb.). The operator also needed a 220 VAC
generator set, and a 10 L (2.6 US gallon) jar with
liquid nitrogen. To the left of the oscilloscope the
Polaroid attachment (6 kg (13 lb.)) can be seen.
Figure 14.3 2015: FLIR One, an accessory to
iPhone and Android mobile phones. Weight: 90 g
(3.2 oz.).
FLIR Systems manufactures all vital mechanical and electronic components of the camera systems itself. From detector design and manufacturing, to lenses and system electronics, to final testing and calibration, all production steps are carried out and
supervised by our own engineers. The in-depth expertise of these infrared specialists ensures the accuracy and reliability of all vital components that are assembled into your infrared camera.
14.1 More than just an infrared camera
At FLIR Systems we recognize that our job is to go beyond just producing the best infrared camera systems. We are committed to enabling all users of our infrared camera systems to work more productively by providing them with the most powerful camera–
software combination. Especially tailored software for predictive maintenance, R & D,
and process monitoring is developed in-house. Most software is available in a wide variety of languages.
We support all our infrared cameras with a wide variety of accessories to adapt your
equipment to the most demanding infrared applications.
14.2 Sharing our knowledge
Although our cameras are designed to be very user-friendly, there is a lot more to thermography than just knowing how to handle a camera. Therefore, FLIR Systems has
founded the Infrared Training Center (ITC), a separate business unit that provides certified training courses. Attending one of the ITC courses will give you a truly hands-on
learning experience.
The staff of the ITC are also there to provide you with any application support you may
need in putting infrared theory into practice.
14.3 Supporting our customers
FLIR Systems operates a worldwide service network to keep your camera running at all
times. If you discover a problem with your camera, local service centers have all the
equipment and expertise to solve it within the shortest possible time. Therefore, there is
no need to send your camera to the other side of the world or to talk to someone who
does not speak your language.
15 Terms, laws, and definitions
Term | Definition
Absorption and emission² | The capacity or ability of an object to absorb incident radiated energy is always the same as the capacity to emit its own energy as radiation
Apparent temperature³ | uncompensated reading from an infrared instrument, containing all radiation incident on the instrument, regardless of its sources
Color palette | assigns different colors to indicate specific levels of apparent temperature. Palettes can provide high or low contrast, depending on the colors used in them
Conduction | direct transfer of thermal energy from molecule to molecule, caused by collisions between the molecules
Convection | heat transfer mode where a fluid is brought into motion, either by gravity or another force, thereby transferring heat from one place to another
Diagnostics⁴ | examination of symptoms and syndromes to determine the nature of faults or failures
Direction of heat transfer⁵,⁶ | Heat will spontaneously flow from hotter to colder, thereby transferring thermal energy from one place to another
Emissivity⁷ | ratio of the power radiated by real bodies to the power that is radiated by a blackbody at the same temperature and at the same wavelength
Energy conservation⁸ | The sum of the total energy contents in a closed system is constant
Exitant radiation | radiation that leaves the surface of an object, regardless of its original sources
Heat | thermal energy that is transferred between two objects (systems) due to their difference in temperature
Heat transfer rate⁹,¹⁰ | The heat transfer rate under steady state conditions is directly proportional to the thermal conductivity of the object, the cross-sectional area of the object through which the heat flows, and the temperature difference between the two ends of the object. It is inversely proportional to the length, or thickness, of the object
Incident radiation | radiation that strikes an object from its surroundings
IR thermography¹¹ | process of acquisition and analysis of thermal information from non-contact thermal imaging devices
Isotherm | replaces certain colors in the scale with a contrasting color. It marks an interval of equal apparent temperature
Qualitative thermography¹² | thermography that relies on the analysis of thermal patterns to reveal the existence of and to locate the position of anomalies
Quantitative thermography¹² | thermography that uses temperature measurement to determine the seriousness of an anomaly, in order to establish repair priorities
2. Kirchhoff’s law of thermal radiation.
3. Based on ISO 18434-1:2008 (en).
4. Based on ISO 13372:2004 (en).
5. 2nd law of thermodynamics.
6. This is a consequence of the 2nd law of thermodynamics, the law itself is more complicated.
7. Based on ISO 16714-3:2016 (en).
8. 1st law of thermodynamics.
9. Fourier’s law.
10. This is the one-dimensional form of Fourier’s law, valid for steady-state conditions.
11. Based on ISO 18434-1:2008 (en).
12. Based on ISO 10878:2013 (en).
Term | Definition
Radiative heat transfer | Heat transfer by the emission and absorption of thermal radiation
Reflected apparent temperature¹³ | apparent temperature of the environment that is reflected by the target into the IR camera
Spatial resolution¹³ | ability of an IR camera to resolve small objects or details
Temperature | measure of the average kinetic energy of the molecules and atoms that make up the substance
Thermal energy¹⁴ | total kinetic energy of the molecules that make up the object
Thermal gradient | gradual change in temperature over distance
Thermal tuning | process of putting the colors of the image on the object of analysis, in order to maximize contrast

13. Based on ISO 16714-3:2016 (en).
14. Thermal energy is part of the internal energy of an object.
16 Thermographic measurement techniques
16.1 Introduction
An infrared camera measures and images the emitted infrared radiation from an object.
The fact that radiation is a function of object surface temperature makes it possible for
the camera to calculate and display this temperature.
However, the radiation measured by the camera does not only depend on the temperature of the object but is also a function of the emissivity. Radiation also originates from
the surroundings and is reflected in the object. The radiation from the object and the reflected radiation will also be influenced by the absorption of the atmosphere.
To measure temperature accurately, it is therefore necessary to compensate for the effects of a number of different radiation sources. This is done on-line automatically by the
camera. The following object parameters must, however, be supplied to the camera:
• The emissivity of the object
• The reflected apparent temperature
• The distance between the object and the camera
• The relative humidity
• The temperature of the atmosphere
16.2 Emissivity
The most important object parameter to set correctly is the emissivity which, in short, is a
measure of how much radiation is emitted from the object, compared to that from a perfect blackbody of the same temperature.
Normally, object materials and surface treatments exhibit emissivity ranging from approximately 0.1 to 0.95. A highly polished (mirror) surface falls below 0.1, while an oxidized
or painted surface has a higher emissivity. Oil-based paint, regardless of color in the visible spectrum, has an emissivity over 0.9 in the infrared. Human skin exhibits an emissivity of 0.97 to 0.98.
Non-oxidized metals represent an extreme case of perfect opacity and high reflectivity,
which does not vary greatly with wavelength. Consequently, the emissivity of metals is
low – only increasing with temperature. For non-metals, emissivity tends to be high, and
decreases with temperature.
16.2.1 Finding the emissivity of a sample
16.2.1.1 Step 1: Determining reflected apparent temperature
Use one of the following two methods to determine reflected apparent temperature:
16.2.1.1.1 Method 1: Direct method
Follow this procedure:
1. Look for possible reflection sources, considering that the incident angle = reflection
angle (a = b).
Figure 16.1 1 = Reflection source
2. If the reflection source is a spot source, modify the source by obstructing it using a
piece of cardboard.
Figure 16.2 1 = Reflection source
3. Measure the radiation intensity (= apparent temperature) from the reflection source
using the following settings:
• Emissivity: 1.0
• Dobj: 0
You can measure the radiation intensity using one of the following two methods:
You cannot use a thermocouple to measure reflected apparent temperature, because a thermocouple measures temperature, but apparent temperature is radiation intensity.
16.2.1.1.2 Method 2: Reflector method
Follow this procedure:
1. Crumple up a large piece of aluminum foil.
2. Uncrumple the aluminum foil and attach it to a piece of cardboard of the same size.
3. Put the piece of cardboard in front of the object you want to measure. Make sure that
the side with aluminum foil points to the camera.
4. Set the emissivity to 1.0.
5. Measure the apparent temperature of the aluminum foil and write it down. The foil is
considered a perfect reflector, so its apparent temperature equals the reflected apparent temperature from the surroundings.
Figure 16.5 Measuring the apparent temperature of the aluminum foil.
16.2.1.2 Step 2: Determining the emissivity
Follow this procedure:
1. Select a place to put the sample.
2. Determine and set reflected apparent temperature according to the previous
procedure.
3. Put a piece of electrical tape with known high emissivity on the sample.
4. Heat the sample at least 20 K above room temperature. Heating must be reasonably
even.
5. Focus and auto-adjust the camera, and freeze the image.
6. Adjust Level and Span for best image brightness and contrast.
7. Set emissivity to that of the tape (usually 0.97).
8. Measure the temperature of the tape using one of the following measurement
functions:
• Isotherm (helps you to determine both the temperature and how evenly you have
heated the sample)
• Spot (simpler)
• Box Avg (good for surfaces with varying emissivity).
9. Write down the temperature.
10. Move your measurement function to the sample surface.
11. Change the emissivity setting until you read the same temperature as your previous
measurement.
12. Write down the emissivity.
Note
• Avoid forced convection.
• Look for a thermally stable surrounding that will not generate spot reflections.
• Use high-quality tape that you know is not transparent, and has a high emissivity you are certain of.
• This method assumes that the temperature of your tape and the sample surface are
the same. If they are not, your emissivity measurement will be wrong.
16.3 Reflected apparent temperature
This parameter is used to compensate for the radiation reflected in the object. If the
emissivity is low and the object temperature is relatively far from that of the reflected sources, it will
be important to set and compensate for the reflected apparent temperature correctly.
16.4 Distance
The distance is the distance between the object and the front lens of the camera. This
parameter is used to compensate for the following two facts:
• That radiation from the target is absorbed by the atmosphere between the object and
the camera.
• That radiation from the atmosphere itself is detected by the camera.
16.5 Relative humidity
The camera can also compensate for the fact that the transmittance is dependent on the relative humidity of the atmosphere. To do this, set the relative humidity to the correct value. For short distances and normal humidity the relative humidity can normally be
left at a default value of 50%.
16.6 Other parameters
In addition, some cameras and analysis programs from FLIR Systems allow you to compensate for the following parameters:
• Atmospheric temperature – i.e. the temperature of the atmosphere between the cam-
era and the target
• External optics temperature – i.e. the temperature of any external lenses or windows
used in front of the camera
• External optics transmittance – i.e. the transmission of any external lenses or windows
used in front of the camera
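Putting the parameters above together, the compensation commonly used in thermography can be written as one measurement equation: the signal reaching the camera is a mix of the object's own emission, the reflected surroundings, and atmospheric emission, weighted by emissivity and atmospheric transmittance. The sketch below solves that equation for the object term; it is a simplified illustration (external optics are ignored, and all quantities are radiance-equivalent signals, not temperatures), not the exact on-line computation performed by the camera.

```python
def object_signal(measured, emissivity, tau_atm, reflected_signal, atmosphere_signal):
    """Solve measured = eps*tau*obj + (1 - eps)*tau*refl + (1 - tau)*atm
    for the object's own signal `obj`. Converting a signal to a temperature
    additionally requires the camera's calibration curve."""
    return (measured
            - (1.0 - emissivity) * tau_atm * reflected_signal
            - (1.0 - tau_atm) * atmosphere_signal) / (emissivity * tau_atm)
```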
17 About calibration
17.1 Introduction
Calibration of a thermal camera is a prerequisite for temperature measurement. The calibration provides the relationship between the input signal and the physical quantity that
the user wants to measure. However, despite its widespread and frequent use, the term
“calibration” is often misunderstood and misused. Local and national differences as well
as translation-related issues create additional confusion.
Unclear terminology can lead to difficulties in communication and erroneous translations,
and subsequently to incorrect measurements due to misunderstandings and, in the worst
case, even to lawsuits.
17.2 Definition—what is calibration?
The International Bureau of Weights and Measures defines calibration in the following way:
an operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement
standards and corresponding indications with associated measurement uncertainties
and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication.
The calibration itself may be expressed in different formats: this can be a statement, calibration function, calibration diagram, calibration curve, or calibration table.
Often, the first step alone in the above definition is perceived and referred to as being
“calibration.” However, this is not (always) sufficient.
Considering the calibration procedure of a thermal camera, the first step establishes the
relation between emitted radiation (the quantity value) and the electrical output signal
(the indication). This first step of the calibration procedure consists of obtaining a homogeneous (or uniform) response when the camera is placed in front of an extended source
of radiation.
As we know the temperature of the reference source emitting the radiation, in the second
step the obtained output signal (the indication) can be related to the reference source’s
temperature (measurement result). The second step includes drift measurement and
compensation.
To be correct, calibration of a thermal camera is, strictly, not expressed through temperature. Thermal cameras are sensitive to infrared radiation: therefore, at first you obtain a
radiance correspondence, then a relationship between radiance and temperature. For
bolometer cameras used by non-R&D customers, radiance is not expressed: only the
temperature is provided.
17.3 Camera calibration at FLIR Systems
Without calibration, an infrared camera would not be able to measure either radiance or
temperature. At FLIR Systems, the calibration of uncooled microbolometer cameras with
a measurement capability is carried out during both production and service. Cooled cameras with photon detectors are often calibrated by the user with special software. With
this type of software, in theory, common handheld uncooled thermal cameras could be
calibrated by the user too. However, as this software is not suitable for reporting
purposes, most users do not have it. Non-measuring devices that are used for imaging
only do not need temperature calibration. Sometimes this is also reflected in camera terminology when talking about infrared or thermal imaging cameras compared with thermography cameras, where the latter are the measuring devices.
The calibration information, no matter if the calibration is done by FLIR Systems or the
user, is stored in calibration curves, which are expressed by mathematical functions. As
radiation intensity changes with both temperature and the distance between the object
and the camera, different curves are generated for different temperature ranges and exchangeable lenses.
17.4 The differences between a calibration performed by a user and that performed directly at FLIR Systems
First, the reference sources that FLIR Systems uses are themselves calibrated and
traceable. This means, at each FLIR Systems site performing calibration, that the sources are controlled by an independent national authority. The camera calibration certificate is confirmation of this. It is proof that not only has the calibration been performed by
FLIR Systems but that it has also been carried out using calibrated references. Some
users own or have access to accredited reference sources, but they are very few in
number.
Second, there is a technical difference. When performing a user calibration, the result is
often (but not always) not drift compensated. This means that the values do not take into
account a possible change in the camera’s output when the camera’s internal temperature varies. This yields a larger uncertainty. Drift compensation uses data obtained in climate-controlled chambers. All FLIR Systems cameras are drift compensated when they
are first delivered to the customer and when they are recalibrated by FLIR Systems service departments.
17.5 Calibration, verification and adjustment
A common misconception is to confuse calibration with verification or adjustment. Indeed, calibration is a prerequisite for verification, which provides confirmation that specified requirements are met. Verification provides objective evidence that a given item
fulfills specified requirements. To obtain the verification, defined temperatures (emitted
radiation) of calibrated and traceable reference sources are measured. The measurement results, including the deviation, are noted in a table. The verification certificate
states that these measurement results meet specified requirements. Sometimes, companies or organizations offer and market this verification certificate as a “calibration
certificate.”
Proper verification—and by extension calibration and/or recalibration—can only be
achieved when a validated protocol is respected. The process is more than placing the
camera in front of blackbodies and checking if the camera output (as temperature, for instance) corresponds to the original calibration table. It is often forgotten that a camera is
not sensitive to temperature but to radiation. Furthermore, a camera is an imaging system, not just a single sensor. Consequently, if the optical configuration allowing the camera to “collect” radiance is poor or misaligned, then the “verification” (or calibration or
recalibration) is worthless.
For instance, one has to ensure that the distance between the blackbody and the camera
as well as the diameter of the blackbody cavity are chosen so as to reduce stray radiation
and the size-of-source effect.
To summarize: a validated protocol must comply with the physical laws for radiance, and
not only those for temperature.
Calibration is also a prerequisite for adjustment, which is the set of operations carried out
on a measuring system such that the system provides prescribed indications corresponding to given values of quantities to be measured, typically obtained from measurement standards. Simplified, adjustment is a manipulation that results in instruments that
measure correctly within their specifications. In everyday language, the term “calibration”
is widely used instead of “adjustment” for measuring devices.
17.6 Non-uniformity correction
When the thermal camera displays ”Calibrating…” it is adjusting for the deviation in response of each individual detector element (pixel). In thermography, this is called a ”nonuniformity correction” (NUC). It is an offset update, and the gain remains unchanged.
The European standard EN 16714-3, Non-destructive Testing—Thermographic Testing
—Part 3: Terms and Definitions, defines an NUC as “Image correction carried out by the
camera software to compensate for different sensitivities of detector elements and other
optical and geometrical disturbances.”
During the NUC (the offset update), a shutter (internal flag) is placed in the optical path,
and all the detector elements are exposed to the same amount of radiation originating
from the shutter. Therefore, in an ideal situation, they should all give the same output signal. However, each individual element has its own response, so the output is not uniform.
This deviation from the ideal result is calculated and used to mathematically perform an
image correction, which is essentially a correction of the displayed radiation signal.
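Expressed as a calculation, the offset update amounts to measuring each pixel's deviation from the mean of a uniform shutter frame and applying that deviation as a correction to every subsequent frame. The snippet below is a minimal NumPy sketch of that idea, not the camera's internal implementation.

```python
import numpy as np

def nuc_offset_update(shutter_frame):
    """With the shutter in the optical path every pixel sees the same
    radiation, so each pixel's deviation from the frame mean becomes its
    offset correction (the gain terms are left unchanged)."""
    shutter_frame = np.asarray(shutter_frame, dtype=float)
    return shutter_frame.mean() - shutter_frame

def apply_offset_correction(raw_frame, offsets):
    """Apply the per-pixel offsets to a raw frame."""
    return np.asarray(raw_frame, dtype=float) + offsets
```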
Some cameras do not have an internal flag. In this case, the offset update must be performed manually using special software and an external uniform source of radiation.
An NUC is performed, for example, at start-up, when changing a measurement range, or
when the environment temperature changes. Some cameras also allow the user to trigger it manually. This is useful when you have to perform a critical measurement with as
little image disturbance as possible.
17.7 Thermal image adjustment (thermal tuning)
Some people use the term “image calibration” when adjusting the thermal contrast and
brightness in the image to enhance specific details. During this operation, the temperature interval is set in such a way that all available colors are used to show only (or mainly)
the temperatures in the region of interest. The correct term for this manipulation is “thermal image adjustment” or “thermal tuning”, or, in some languages, “thermal image optimization.” You must be in manual mode to undertake this, otherwise the camera will set the
lower and upper limits of the displayed temperature interval automatically to the coldest
and hottest temperatures in the scene.
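In software, thermal tuning corresponds to stretching the palette over the temperature interval found in the region of interest instead of over the whole scene. The sketch below shows that mapping in NumPy with a hypothetical region-of-interest argument; the camera and ResearchIR Max perform the equivalent operation internally.

```python
import numpy as np

def thermal_tuning(temperature_image, roi):
    """Map the image to 0..1 so that the full colour palette spans only the
    temperature interval present in `roi` (a (row_slice, col_slice) pair).
    Values outside the interval are clipped, as in manual level/span mode."""
    img = np.asarray(temperature_image, dtype=float)
    region = img[roi]
    t_min, t_max = region.min(), region.max()
    span = max(t_max - t_min, 1e-6)              # guard against a flat region
    return np.clip((img - t_min) / span, 0.0, 1.0)
```

For example, thermal_tuning(img, (slice(100, 200), slice(50, 150))) spends the whole palette on that window of the image.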
18 History of infrared technology
Before the year 1800, the existence of the infrared portion of the electromagnetic spectrum wasn't even suspected. The original significance of the infrared spectrum, or simply
‘the infrared’ as it is often called, as a form of heat radiation is perhaps less obvious today than it was at the time of its discovery by Herschel in 1800.
Figure 18.1 Sir William Herschel (1738–1822)
The discovery was made accidentally during the search for a new optical material. Sir
William Herschel – Royal Astronomer to King George III of England, and already famous
for his discovery of the planet Uranus – was searching for an optical filter material to reduce the brightness of the sun’s image in telescopes during solar observations. While
testing different samples of colored glass which gave similar reductions in brightness he
was intrigued to find that some of the samples passed very little of the sun’s heat, while
others passed so much heat that he risked eye damage after only a few seconds’
observation.
Herschel was soon convinced of the necessity of setting up a systematic experiment,
with the objective of finding a single material that would give the desired reduction in
brightness as well as the maximum reduction in heat. He began the experiment by actually repeating Newton’s prism experiment, but looking for the heating effect rather than
the visual distribution of intensity in the spectrum. He first blackened the bulb of a sensitive mercury-in-glass thermometer with ink, and with this as his radiation detector he proceeded to test the heating effect of the various colors of the spectrum formed on the top
of a table by passing sunlight through a glass prism. Other thermometers, placed outside
the sun’s rays, served as controls.
As the blackened thermometer was moved slowly along the colors of the spectrum, the
temperature readings showed a steady increase from the violet end to the red end. This
was not entirely unexpected, since the Italian researcher, Landriani, in a similar experiment in 1777 had observed much the same effect. It was Herschel, however, who was
the first to recognize that there must be a point where the heating effect reaches a maximum, and that measurements confined to the visible portion of the spectrum failed to locate this point.
Figure 18.2 Marsilio Landriani (1746–1815)
Moving the thermometer into the dark region beyond the red end of the spectrum, Herschel confirmed that the heating continued to increase. The maximum point, when he
found it, lay well beyond the red end – in what is known today as the ‘infrared
wavelengths’.
When Herschel revealed his discovery, he referred to this new portion of the electromagnetic spectrum as the ‘thermometrical spectrum’. The radiation itself he sometimes referred to as ‘dark heat’, or simply ‘the invisible rays’. Ironically, and contrary to popular
opinion, it wasn't Herschel who originated the term ‘infrared’. The word only began to appear in print around 75 years later, and it is still unclear who should receive credit as the
originator.
Herschel’s use of glass in the prism of his original experiment led to some early controversies with his contemporaries about the actual existence of the infrared wavelengths.
Different investigators, in attempting to confirm his work, used various types of glass indiscriminately, having different transparencies in the infrared. Through his later experiments, Herschel was aware of the limited transparency of glass to the newly-discovered
thermal radiation, and he was forced to conclude that optics for the infrared would probably be doomed to the use of reflective elements exclusively (i.e. plane and curved mirrors). Fortunately, this proved to be true only until 1830, when the Italian investigator,
Melloni, made his great discovery that naturally occurring rock salt (NaCl) – which was
available in large enough natural crystals to be made into lenses and prisms – is remarkably transparent to the infrared. The result was that rock salt became the principal infrared optical material, and remained so for the next hundred years, until the art of synthetic
crystal growing was mastered in the 1930’s.
Figure 18.3 Macedonio Melloni (1798–1854)
Thermometers, as radiation detectors, remained unchallenged until 1829, the year Nobili
invented the thermocouple. (Herschel’s own thermometer could be read to 0.2 °C
(0.036 °F), and later models were able to be read to 0.05 °C (0.09 °F)). Then a breakthrough occurred; Melloni connected a number of thermocouples in series to form the
first thermopile. The new device was at least 40 times as sensitive as the best thermometer of the day for detecting heat radiation – capable of detecting the heat from a person
standing three meters away.
The first so-called ‘heat-picture’ became possible in 1840, the result of work by Sir John
Herschel, son of the discoverer of the infrared and a famous astronomer in his own right.
Based upon the differential evaporation of a thin film of oil when exposed to a heat pattern focused upon it, the thermal image could be seen by reflected light where the interference effects of the oil film made the image visible to the eye. Sir John also managed
to obtain a primitive record of the thermal image on paper, which he called a
‘thermograph’.
Figure 18.4 Samuel P. Langley (1834–1906)
The improvement of infrared-detector sensitivity progressed slowly. Another major breakthrough, made by Langley in 1880, was the invention of the bolometer. This consisted of
a thin blackened strip of platinum connected in one arm of a Wheatstone bridge circuit
upon which the infrared radiation was focused and to which a sensitive galvanometer responded. This instrument is said to have been able to detect the heat from a cow at a
distance of 400 meters.
An English scientist, Sir James Dewar, first introduced the use of liquefied gases as cooling agents (such as liquid nitrogen with a temperature of –196°C (–320.8°F)) in low temperature research. In 1892 he invented a unique vacuum insulating container in which it
is possible to store liquefied gases for entire days. The common ‘thermos bottle’, used
for storing hot and cold drinks, is based upon his invention.
Between the years 1900 and 1920, the inventors of the world ‘discovered’ the infrared.
Many patents were issued for devices to detect personnel, artillery, aircraft, ships – and
even icebergs. The first operating systems, in the modern sense, began to be developed
during the 1914–18 war, when both sides had research programs devoted to the military
exploitation of the infrared. These programs included experimental systems for enemy
intrusion/detection, remote temperature sensing, secure communications, and ‘flying torpedo’ guidance. An infrared search system tested during this period was able to detect
an approaching airplane at a distance of 1.5 km (0.94 miles), or a person more than 300
meters (984 ft.) away.
The most sensitive systems up to this time were all based upon variations of the bolometer idea, but the period between the two wars saw the development of two revolutionary
new infrared detectors: the image converter and the photon detector. At first, the image
converter received the greatest attention by the military, because it enabled an observer
for the first time in history to literally ‘see in the dark’. However, the sensitivity of the image converter was limited to the near infrared wavelengths, and the most interesting military targets (i.e. enemy soldiers) had to be illuminated by infrared search beams. Since
this involved the risk of giving away the observer’s position to a similarly-equipped enemy
observer, it is understandable that military interest in the image converter eventually
faded.
The tactical military disadvantages of so-called 'active’ (i.e. search beam-equipped) thermal imaging systems provided impetus following the 1939–45 war for extensive secret
military infrared-research programs into the possibilities of developing ‘passive’ (no
search beam) systems around the extremely sensitive photon detector. During this period, military secrecy regulations completely prevented disclosure of the status of infraredimaging technology. This secrecy only began to be lifted in the middle of the 1950’s, and
from that time adequate thermal-imaging devices finally began to be available to civilian
science and industry.
19 Theory of thermography
19.1 Introduction
The subjects of infrared radiation and the related technique of thermography are still new
to many who will use an infrared camera. In this section the theory behind thermography
will be given.
19.2 The electromagnetic spectrum
The electromagnetic spectrum is divided arbitrarily into a number of wavelength regions,
called bands, distinguished by the methods used to produce and detect the radiation.
There is no fundamental difference between radiation in the different bands of the electromagnetic spectrum. They are all governed by the same laws and the only differences
are those due to differences in wavelength.
Thermography makes use of the infrared spectral band. At the short-wavelength end the
boundary lies at the limit of visual perception, in the deep red. At the long-wavelength
end it merges with the microwave radio wavelengths, in the millimeter range.
The infrared band is often further subdivided into four smaller bands, the boundaries of
which are also arbitrarily chosen. They include: the near infrared (0.75–3 μm), the middle infrared (3–6 μm), the far infrared (6–15 μm) and the extreme infrared (15–100 μm).
Although the wavelengths are given in μm (micrometers), other units are often still used
to measure wavelength in this spectral region, e.g. nanometer (nm) and Ångström (Å).
The relationship between the different wavelength measurements is:

10 000 Å = 1 000 nm = 1 μm
19.3 Blackbody radiation
A blackbody is defined as an object which absorbs all radiation that impinges on it at any
wavelength. The apparent misnomer black relating to an object emitting radiation is explained by Kirchhoff’s Law (after Gustav Robert Kirchhoff, 1824–1887), which states that
a body capable of absorbing all radiation at any wavelength is equally capable of emitting radiation.
Figure 19.2 Gustav Robert Kirchhoff (1824–1887)
The construction of a blackbody source is, in principle, very simple. The radiation characteristics of an aperture in an isothermal cavity made of an opaque absorbing material represent almost exactly the properties of a blackbody. A practical application of the
principle to the construction of a perfect absorber of radiation consists of a box that is
light tight except for an aperture in one of the sides. Any radiation which then enters the
hole is scattered and absorbed by repeated reflections, so that only an infinitesimal fraction
can possibly escape. The blackness which is obtained at the aperture is nearly equal to
that of a blackbody and almost perfect for all wavelengths.
By providing such an isothermal cavity with a suitable heater it becomes what is termed
a cavity radiator. An isothermal cavity heated to a uniform temperature generates blackbody radiation, the characteristics of which are determined solely by the temperature of
the cavity. Such cavity radiators are commonly used as sources of radiation in temperature reference standards in the laboratory for calibrating thermographic instruments,
such as a FLIR Systems camera for example.
If the temperature of blackbody radiation increases to more than 525°C (977°F), the
source begins to be visible, so that it no longer appears black to the eye. This is the incipient red heat temperature of the radiator, which then becomes orange or yellow as the
temperature increases further. In fact, the definition of the so-called color temperature of
an object is the temperature to which a blackbody would have to be heated to have the
same appearance.
Now consider three expressions that describe the radiation emitted from a blackbody.
19.3.1 Planck’s law
Figure 19.3 Max Planck (1858–1947)
Max Planck (1858–1947) was able to describe the spectral distribution of the radiation
from a blackbody by means of the following formula:
$$W_{\lambda b} = \frac{2\pi h c^{2}}{\lambda^{5}\left(e^{hc/\lambda kT}-1\right)} \times 10^{-6}\quad[\mathrm{Watt/m^{2},\ \mu m}]$$

where:

W_λb   Blackbody spectral radiant emittance at wavelength λ.
c      Velocity of light = 3 × 10⁸ m/s.
h      Planck’s constant = 6.6 × 10⁻³⁴ Joule sec.
k      Boltzmann’s constant = 1.4 × 10⁻²³ Joule/K.
T      Absolute temperature (K) of a blackbody.
λ      Wavelength (μm).

Note The factor 10⁻⁶ is used since spectral emittance in the curves is expressed in Watt/m², μm.
Planck’s formula, when plotted graphically for various temperatures, produces a family of
curves. Following any particular Planck curve, the spectral emittance is zero at λ = 0,
then increases rapidly to a maximum at a wavelength λ_max and after passing it approaches zero again at very long wavelengths. The higher the temperature, the shorter
the wavelength at which the maximum occurs.
Figure 19.4 Blackbody spectral radiant emittance according to Planck’s law, plotted for various absolute temperatures. 1: Spectral radiant emittance (W/cm² × 10³ (μm)); 2: Wavelength (μm)
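For readers who want to reproduce such a curve numerically, the sketch below evaluates Planck’s law as stated above in Python. It is an illustrative sketch only, not FLIR software; the function name and the example wavelength and temperature are chosen here for demonstration, and slightly more precise values of the physical constants are used than the rounded ones listed above.

```python
# Illustrative sketch only (not FLIR software): Planck's law as given above.
import math

H = 6.626e-34   # Planck's constant [Joule sec]
C = 2.998e8     # velocity of light [m/s]
K = 1.381e-23   # Boltzmann's constant [Joule/K]

def planck_emittance(wavelength_um: float, temp_k: float) -> float:
    """Blackbody spectral radiant emittance W_lambda_b in Watt/(m^2, um)."""
    lam = wavelength_um * 1e-6                      # micrometres -> metres
    exponent = H * C / (lam * K * temp_k)
    spectral = 2 * math.pi * H * C ** 2 / (lam ** 5 * (math.exp(exponent) - 1.0))
    return spectral * 1e-6                          # the 10^-6 factor from the note above

# A 300 K surface near its emission peak (about 9.7 um):
print(planck_emittance(9.7, 300.0))                 # roughly 31 Watt/(m^2, um)
```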
19.3.2 Wien’s displacement law
By differentiating Planck’s formula with respect to λ, and finding the maximum, we have:
$$\lambda_{\max} = \frac{2898}{T}\quad[\mu\mathrm{m}]$$

This is Wien’s formula (after Wilhelm Wien, 1864–1928), which expresses mathematically the common observation that colors vary from red to orange or yellow as the temperature of a thermal radiator increases. The wavelength of the color is the same as the
wavelength calculated for λ_max. A good approximation of the value of λ_max for a given
blackbody temperature is obtained by applying the rule-of-thumb 3 000/T μm. Thus, a
very hot star such as Sirius (11 000 K), emitting bluish-white light, radiates with the peak
of spectral radiant emittance occurring within the invisible ultraviolet spectrum, at wavelength 0.27 μm.
Figure 19.5 Wilhelm Wien (1864–1928)
The sun (approx. 6 000 K) emits yellow light, peaking at about 0.5 μm in the middle of
the visible light spectrum.
At room temperature (300 K) the peak of radiant emittance lies at 9.7 μm, in the far infrared, while at the temperature of liquid nitrogen (77 K) the maximum of the almost insignificant amount of radiant emittance occurs at 38 μm, in the extreme infrared wavelengths.
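The peak wavelengths quoted above can be checked with a one-line application of Wien’s law. This is an illustrative sketch only; 2898 μm·K is the usual displacement constant behind the 3 000/T rule-of-thumb, so the printed values differ slightly from the rounded figures in the text.

```python
# Illustrative sketch only: Wien's displacement law, lambda_max = 2898/T um.
def wien_peak_um(temp_k: float) -> float:
    """Wavelength of maximum spectral radiant emittance, in micrometres."""
    return 2898.0 / temp_k

for name, temp in [("Sirius", 11000.0), ("The sun", 6000.0),
                   ("Room temperature", 300.0), ("Liquid nitrogen", 77.0)]:
    print(f"{name} ({temp:.0f} K): {wien_peak_um(temp):.2f} um")
# Sirius: 0.26 um, the sun: 0.48 um, room temperature: 9.66 um,
# liquid nitrogen: 37.64 um (close to the rounded values quoted in the text)
```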
Figure 19.6 Planckian curves plotted on semi-log scales from 100 K to 1000 K. The dotted line represents the locus of maximum radiant emittance at each temperature as described by Wien's displacement law. 1: Spectral radiant emittance (W/cm² (μm)); 2: Wavelength (μm).
19.3.3 Stefan-Boltzmann's law
By integrating Planck’s formula from λ = 0 to λ = ∞, we obtain the total radiant emittance (W_b) of a blackbody:

$$W_{b} = \sigma T^{4}\quad[\mathrm{Watt/m^{2}}]$$

where σ is the Stefan-Boltzmann constant (5.67 × 10⁻⁸ Watt/m²K⁴).
This is the Stefan-Boltzmann formula (after Josef Stefan, 1835–1893, and Ludwig Boltzmann, 1844–1906), which states that the total emissive power of a blackbody is proportional to the fourth power of its absolute temperature. Graphically, W_b represents the
area below the Planck curve for a particular temperature. It can be shown that the radiant
emittance in the interval λ = 0 to λ_max is only 25% of the total, which represents about the
amount of the sun’s radiation which lies inside the visible light spectrum.
Figure 19.7 Josef Stefan (1835–1893), and Ludwig Boltzmann (1844–1906)
Using the Stefan-Boltzmann formula to calculate the power radiated by the human body,
at a temperature of 300 K and an external surface area of approx. 2 m², we obtain 1 kW.
This power loss could not be sustained if it were not for the compensating absorption of
radiation from surrounding surfaces, at room temperatures which do not vary too drastically from the temperature of the body – or, of course, the addition of clothing.
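As a rough numerical check of this estimate, the sketch below applies the Stefan-Boltzmann law. It is an illustration only; the surroundings temperature of 293 K is an assumption introduced here for the comparison and is not given in the text.

```python
# Illustrative sketch only: Stefan-Boltzmann law, W_b = sigma * T^4.
SIGMA = 5.67e-8            # Stefan-Boltzmann constant [Watt/(m^2 K^4)]

def total_emittance(temp_k: float) -> float:
    """Total radiant emittance of a blackbody, in Watt/m^2."""
    return SIGMA * temp_k ** 4

area_m2 = 2.0              # approximate external surface area of the body (from the text)
body_k = 300.0             # body surface temperature (from the text)
room_k = 293.0             # assumed surroundings temperature (not from the text)

radiated_w = total_emittance(body_k) * area_m2
net_loss_w = (total_emittance(body_k) - total_emittance(room_k)) * area_m2
print(f"radiated: {radiated_w:.0f} W, net loss to 20 C surroundings: {net_loss_w:.0f} W")
# Radiated power is roughly 0.9 kW; the net loss is far smaller, which is why
# the compensating absorption from the surroundings matters.
```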
19.3.4 Non-blackbody emitters
So far, only blackbody radiators and blackbody radiation have been discussed. However,
real objects almost never comply with these laws over an extended wavelength region –
although they may approach the blackbody behavior in certain spectral intervals. For example, a certain type of white paint may appear perfectly white in the visible light spectrum, but becomes distinctly gray at about 2 μm, and beyond 3 μm it is almost black.
There are three processes which can occur that prevent a real object from acting like a
blackbody: a fraction of the incident radiation α may be absorbed, a fraction ρ may be reflected, and a fraction τ may be transmitted. Since all of these factors are more or less
wavelength dependent, the subscript λ is used to imply the spectral dependence of their
definitions. Thus:
• The spectral absorptance α_λ = the ratio of the spectral radiant power absorbed by an object to that incident upon it.
• The spectral reflectance ρ_λ = the ratio of the spectral radiant power reflected by an object to that incident upon it.
• The spectral transmittance τ_λ = the ratio of the spectral radiant power transmitted through an object to that incident upon it.
The sum of these three factors must always add up to the whole at any wavelength, so
we have the relation:

α_λ + ρ_λ + τ_λ = 1

For opaque materials τ_λ = 0 and the relation simplifies to:

α_λ + ρ_λ = 1
Another factor, called the emissivity, is required to describe the fraction ε of the radiant
emittance of a blackbody produced by an object at a specific temperature. Thus, we
have the definition:
The spectral emissivity ε_λ = the ratio of the spectral radiant power from an object to that
from a blackbody at the same temperature and wavelength.
Expressed mathematically, this can be written as the ratio of the spectral emittance of
the object to that of a blackbody as follows:

ε_λ = W_λo / W_λb

where W_λo is the spectral radiant emittance of the object.
Generally speaking, there are three types of radiation source, distinguished by the ways
in which the spectral emittance of each varies with wavelength.
• A blackbody, for which ε_λ = ε = 1
• A graybody, for which ε_λ = ε = constant less than 1
• A selective radiator, for which ε varies with wavelength
According to Kirchhoff’s law, for any material the spectral emissivity and spectral absorptance of a body are equal at any specified temperature and wavelength. That is:

ε_λ = α_λ

From this we obtain, for an opaque material (since α_λ + ρ_λ = 1):

ε_λ + ρ_λ = 1

For highly polished materials ε_λ approaches zero, so that for a perfectly reflecting material (i.e. a perfect mirror) we have:

ρ_λ = 1
For a graybody radiator, the Stefan-Boltzmann formula becomes:

W = εσT⁴  [Watt/m²]

This states that the total emissive power of a graybody is the same as that of a blackbody at the
same temperature, reduced in proportion to the value of ε of the graybody.
Figure 19.8 Spectral radiant emittance of three types of radiators. 1: Spectral radiant emittance; 2: Wavelength; 3: Blackbody; 4: Selective radiator; 5: Graybody.
Figure 19.9 Spectral emissivity of three types of radiators. 1: Spectral emissivity; 2: Wavelength; 3: Blackbody; 4: Graybody; 5: Selective radiator.
19.4 Infrared semi-transparent materials
Consider now a non-metallic, semi-transparent body – let us say, in the form of a thick flat
plate of plastic material. When the plate is heated, radiation generated within its volume
must work its way toward the surfaces through the material in which it is partially absorbed. Moreover, when it arrives at the surface, some of it is reflected back into the interior. The back-reflected radiation is again partially absorbed, but some of it arrives at the
other surface, through which most of it escapes; part of it is reflected back again.
Although the progressive reflections become weaker and weaker they must all be added
up when the total emittance of the plate is sought. When the resulting geometrical series
is summed, the effective emissivity of a semi-transparent plate is obtained as:

$$\varepsilon_{\lambda} = \frac{(1-\rho_{\lambda})(1-\tau_{\lambda})}{1-\rho_{\lambda}\tau_{\lambda}}$$

When the plate becomes opaque this formula is reduced to the single formula:

$$\varepsilon_{\lambda} = 1-\rho_{\lambda}$$
This last relation is a particularly convenient one, because it is often easier to measure
reflectance than to measure emissivity directly.
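As a small illustration of that convenience, the sketch below converts a measured reflectance into emissivity for an opaque surface using ε_λ = 1 – ρ_λ. It is an illustrative sketch only; the example reflectance values are hypothetical.

```python
# Illustrative sketch only: emissivity of an opaque surface from measured reflectance.
def emissivity_from_reflectance(rho: float) -> float:
    """epsilon_lambda = 1 - rho_lambda for an opaque material (tau_lambda = 0)."""
    if not 0.0 <= rho <= 1.0:
        raise ValueError("reflectance must lie between 0 and 1")
    return 1.0 - rho

print(f"{emissivity_from_reflectance(0.05):.2f}")  # strongly absorbing surface -> 0.95
print(f"{emissivity_from_reflectance(0.90):.2f}")  # highly polished, mirror-like surface -> 0.10
```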
20 The measurement formula
As already mentioned, when viewing an object, the camera receives radiation not only
from the object itself. It also collects radiation from the surroundings reflected via the object surface. Both these radiation contributions become attenuated to some extent by the
atmosphere in the measurement path. In addition, there is a third radiation contribution from
the atmosphere itself.
This description of the measurement situation, as illustrated in the figure below, is so far
a fairly true description of the real conditions. What has been neglected could, for instance, be sunlight scattering in the atmosphere or stray radiation from intense radiation
sources outside the field of view. Such disturbances are difficult to quantify; however, in
most cases they are fortunately small enough to be neglected. In case they are not negligible, the measurement configuration is likely to be such that the risk of disturbance is
obvious, at least to a trained operator. It is then the operator’s responsibility to modify the measurement situation to avoid the disturbance, e.g. by changing the viewing direction or shielding
off intense radiation sources.
Accepting the description above, we can use the figure below to derive a formula for the
calculation of the object temperature from the calibrated camera output.
Figure 20.1 A schematic representation of the general thermographic measurement situation. 1: Surroundings; 2: Object; 3: Atmosphere; 4: Camera
Assume that the received radiation power W from a blackbody source of temperature
T_source at a short distance generates a camera output signal U_source that is proportional to
the power input (power linear camera). We can then write (Equation 1):

$$U_{source} = CW(T_{source})$$

or, with simplified notation:

$$U_{source} = CW_{source}$$

where C is a constant.
Should the source be a graybody with emittance ε, the received radiation would consequently be εW_source.
We are now ready to write the three collected radiation power terms:
1. Emission from the object = ετW_obj, where ε is the emittance of the object and τ is the
transmittance of the atmosphere. The object temperature is T_obj.
2. Reflected emission from ambient sources = (1 – ε)τW_refl, where (1 – ε) is the reflectance of the object. The ambient sources have the temperature T_refl.
   It has here been assumed that the temperature T_refl is the same for all emitting surfaces within the halfsphere seen from a point on the object surface. This is of course
   sometimes a simplification of the true situation. It is, however, a necessary simplification in order to derive a workable formula, and T_refl can – at least theoretically – be given a value that represents an efficient temperature of a complex surrounding.
   Note also that we have assumed that the emittance for the surroundings = 1. This is
   correct in accordance with Kirchhoff’s law: All radiation impinging on the surrounding
   surfaces will eventually be absorbed by the same surfaces. Thus the emittance = 1.
   (Note though that the latest discussion requires the complete sphere around the object to be considered.)
3. Emission from the atmosphere = (1 – τ)τW_atm, where (1 – τ) is the emittance of the atmosphere. The temperature of the atmosphere is T_atm.
The total received radiation power can now be written (Equation 2):
We multiply each term by the constant C of Equation 1 and replace the CW products by
the corresponding U according to the same equation, and get (Equation 3):
Solve Equation 3 for U_obj (Equation 4):
This is the general measurement formula used in all the FLIR Systems thermographic
equipment. The voltages of the formula are:
Table 20.1 Voltages

U_obj    Calculated camera output voltage for a blackbody of temperature T_obj, i.e. a voltage that can be directly converted into true requested object temperature.
U_tot    Measured camera output voltage for the actual case.
U_refl   Theoretical camera output voltage for a blackbody of temperature T_refl according to the calibration.
U_atm    Theoretical camera output voltage for a blackbody of temperature T_atm according to the calibration.
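The compensation can be sketched in a few lines of Python. This is an illustration only, not FLIR firmware: since the equation images are not reproduced in this text, the sketch assumes the commonly used form of the object-signal compensation, U_obj = U_tot/(ετ) – (1 – ε)/ε · U_refl – (1 – τ)/(ετ) · U_atm, and the function and parameter names are chosen here for demonstration.

```python
# Illustrative sketch only (not FLIR firmware). Assumes the commonly used
# compensation form; the exact form of Equation 4 in a given camera may differ.
def object_signal(u_tot: float, u_refl: float, u_atm: float,
                  emissivity: float, transmittance: float) -> float:
    """Blackbody-equivalent object signal U_obj recovered from the measured signal U_tot."""
    eps, tau = emissivity, transmittance
    return (u_tot / (eps * tau)
            - (1.0 - eps) / eps * u_refl
            - (1.0 - tau) / (eps * tau) * u_atm)

# Example with made-up signal values (volts):
print(object_signal(u_tot=3.2, u_refl=0.8, u_atm=0.8,
                    emissivity=0.9, transmittance=0.88))
```

The resulting U_obj is then converted into temperature via the calibration curve, as described in the text.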
The operator has to supply a number of parameter values for the calculation:
• the object emittance ε,
• the relative humidity,
• T_atm
• object distance (D_obj)
• the (effective) temperature of the object surroundings, or the reflected ambient temperature T_refl, and
• the temperature of the atmosphere T_atm
This task could sometimes be a heavy burden for the operator since there are normally
no easy ways to find accurate values of emittance and atmospheric transmittance for the
actual case. The two temperatures are normally less of a problem provided the surroundings do not contain large and intense radiation sources.
A natural question in this connection is: How important is it to know the right values of
these parameters? It is of interest already at this point to get a feeling for the problem
by looking into some different measurement cases and comparing the relative
magnitudes of the three radiation terms. This will give indications about when it is important to use correct values of which parameters.
The figures below illustrate the relative magnitudes of the three radiation contributions
for three different object temperatures, two emittances, and two spectral ranges: SW and
LW. Remaining parameters have the following fixed values:
• τ = 0.88
• T_refl = +20°C (+68°F)
• T_atm = +20°C (+68°F)
It is obvious that measurement of low object temperatures is more critical than measurement of high temperatures, since the ‘disturbing’ radiation sources are relatively much stronger in the first case. Should the object emittance also be low, the situation would be still
more difficult.
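To get a feeling for this effect, the sketch below compares the three contributions for a low and a high object temperature. It is an illustration only: total radiance from the Stefan-Boltzmann law is used as a stand-in for the band-limited radiance a real SW or LW camera measures, and the emittance of 0.9 is an assumed example value; τ and the two +20°C temperatures are the fixed values listed above.

```python
# Illustrative sketch only: relative size of the three radiation terms.
SIGMA = 5.67e-8                                        # Watt/(m^2 K^4)

def w(temp_c: float) -> float:
    """Blackbody total emittance for a temperature given in degrees Celsius."""
    return SIGMA * (temp_c + 273.15) ** 4

def contributions(t_obj_c: float, eps: float = 0.9, tau: float = 0.88,
                  t_refl_c: float = 20.0, t_atm_c: float = 20.0) -> dict:
    obj = eps * tau * w(t_obj_c)                       # emission from the object
    refl = (1.0 - eps) * tau * w(t_refl_c)             # reflected ambient emission
    atm = (1.0 - tau) * w(t_atm_c)                     # emission from the atmosphere
    total = obj + refl + atm
    return {"object": obj / total, "reflected": refl / total, "atmosphere": atm / total}

print(contributions(30.0))     # low object temperature: the disturbing terms matter
print(contributions(500.0))    # high object temperature: the object term dominates
```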
Finally, we have to answer a question about the importance of being allowed to use the
calibration curve above the highest calibration point, which we call extrapolation. Imagine
that we in a certain case measure U_tot = 4.5 volts. The highest calibration point for the
camera was in the order of 4.1 volts, a value unknown to the operator. Thus, even if the
object happened to be a blackbody, i.e. U_obj = U_tot, we are actually performing extrapolation of the calibration curve when converting 4.5 volts into temperature.
Let us now assume that the object is not black but has an emittance of 0.75, and the transmittance is 0.92. We also assume that the two second terms of Equation 4 amount to 0.5
volts together. Computation of U_obj by means of Equation 4 then results in U_obj = 4.5 /
0.75 / 0.92 – 0.5 = 6.0 volts. This is a rather extreme extrapolation, particularly when considering that the video amplifier might limit the output to 5 volts! Note, though, that the application of the calibration curve is a theoretical procedure where no electronic or other
limitations exist. We trust that if there had been no signal limitations in the camera, and if
it had been calibrated far beyond 5 volts, the resulting curve would have been very much
the same as our real curve extrapolated beyond 4.1 volts, provided the calibration algorithm is based on radiation physics, like the FLIR Systems algorithm. Of course there
must be a limit to such extrapolations.
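The arithmetic of this example can be restated as follows. This is an illustrative sketch only; the 0.5 volt figure is the assumption made in the text for the combined reflected and atmospheric terms.

```python
# Illustrative sketch only: the extrapolation example from the text.
u_tot = 4.5                 # measured camera output [V]
emissivity = 0.75
transmittance = 0.92
disturbance_terms_v = 0.5   # assumed combined reflected + atmospheric terms [V]

u_obj = u_tot / emissivity / transmittance - disturbance_terms_v
print(f"U_obj = {u_obj:.2f} V")   # about 6.0 V, well beyond the 4.1 V calibration point
```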
Note The emissivity values in the table below are recorded using a shortwave (SW)
camera. The values should be regarded as recommendations only and used with
caution.