
Baumer VisiLine IP
User's Guide for Gigabit Ethernet Cameras
Table of Contents

1. General Information
2. General safety instructions
3. Intended Use
4. General Description
5. Camera Models
6. Installation
 6.1 Environmental Requirements
 6.2 Heat Transmission
7. Pin-Assignment
 7.1 Power Supply and Digital IOs
 7.2 Ethernet Interface (PoE)
8. Product Specifications
 8.1 Spectral Sensitivity
 8.2 Field of View Position
 8.3 Acquisition Modes and Timings
  8.3.3 Trigger Mode
  8.3.4 Advanced Timings for GigE Vision® Message Channel
 8.4 Software
  8.4.1 Baumer GAPI
  8.4.2 3rd Party Software
9. Camera Functionalities
 9.1 Image Acquisition
  9.1.1 Image Format
  9.1.2 Pixel Format
  9.1.3 Exposure Time
  9.1.7 Gamma Correction
  9.1.8 Region of Interest
  9.1.9 Binning
  9.1.10 Brightness Correction (Binning Correction)
  9.1.11 Flip Image
 9.2 Color Processing
 9.3 Color Adjustment – White Balance
  9.3.1 User-specific Color Adjustment
  9.3.2 One Push White Balance
 9.4 Analog Controls
  9.4.1 Offset / Black Level
  9.4.2 Gain
 9.5 Pixel Correction
  9.5.3 Defectpixellist
 9.6 Process Interface
  9.6.1 Digital IOs
  9.6.2 IO Circuits
  9.6.4 Trigger Source
  9.6.6 Flash Signal
  9.6.7 Timers
  9.6.8 Frame Counter
 9.7 Sequencer
  9.7.1 General Information
  9.7.2 Baumer Optronic Sequencer in Camera xml-file
  9.7.3 Examples
  9.7.5 Double Shutter
 9.8 Device Reset
 9.9 User Sets
 9.10 Factory Settings
 9.11 Timestamp
10. Interface Functionalities
 10.1 Device Information
 10.2 Baumer Image Info Header
 10.3 Packet Size and Maximum Transmission Unit (MTU)
 10.4 Inter Packet Gap
  10.4.1 Example 1: Multi Camera Operation – Minimal IPG
 10.5 Transmission Delay
  10.5.1 Time Saving in Multi-Camera Operation
  10.5.2 Configuration Example
 10.6 Multicast
 10.7 IP Configuration
  10.7.1 Persistent IP
  10.7.2 DHCP (Dynamic Host Configuration Protocol)
  10.7.4 Force IP
 10.8 Packet Resend
  10.8.4 Termination Conditions
 10.9 Message Channel
  10.9.1 Event Generation
 10.10 Action Command / Trigger over Ethernet
11. Start-Stop-Behaviour
 11.1 Start / Stop / Abort Acquisition (Camera)
 11.2 Start / Stop Interface
 11.3 Acquisition Modes
  11.3.1 Free Running
  11.3.3 Sequencer
12. Cleaning
13. Transport / Storage
14. Disposal
15. Warranty Notes
16. Support
17. Conformity
 17.1 CE
 17.2 FCC – Class B Device
1. General Information

Thanks for purchasing a camera of the Baumer family. This User's Guide describes how to connect, set up and use the camera.
Read this manual carefully and observe the notes and safety instructions!
Target group for this User's Guide

This User's Guide is aimed at experienced users who want to integrate camera(s) into a vision system.
Copyright
Any duplication or reprinting of this documentation, in whole or in part, and the reproduction of the illustrations even in modified form is permitted only with the written approval of Baumer. This document is subject to change without notice.

Classification of the safety instructions

In this User's Guide, the safety instructions are classified as follows:
Notice
Gives helpful notes on operation or other general recommendations.
Caution
Indicates a possibly dangerous situation. If the situation is not avoided, slight or minor injury could result or the device may be damaged.
2. General safety instructions

Caution
Heat can damage the camera. Provide adequate dissipation of heat to ensure that the temperature does not exceed the specified value (see Heat Transmission).
As there are numerous possibilities for installation, Baumer does not specify a specific method for proper heat dissipation.
3. Intended Use
The camera is used to capture images that can be transferred over a GigE interface to a PC.
4. General Description

1 - Tube
2 - C-Mount lens connection
3 - LEDs
4 - Power supply / Digital-IO
5 - Data- / PoE-Interface

All Baumer Gigabit Ethernet cameras of the VisiLine IP family are characterized by:

Best image quality
▪ Low noise and structure-free image information
▪ High quality mode with minimum noise

Flexible image acquisition
▪ Industrially compliant process interface with parameter setting capability (trigger and flash)

Fast image transfer
▪ Reliable transmission up to 1000 Mbit/s according to IEEE 802.3
▪ Cable length up to 100 m
▪ PoE (Power over Ethernet)
▪ Baumer driver for high data volume with low CPU load
▪ High-speed multi-camera operation
▪ Gen<I>Cam™ and GigE Vision® compliant

Perfect integration
▪ Flexible generic programming interface (Baumer GAPI) for all Baumer cameras
▪ Powerful Software Development Kit (SDK) with sample codes and help files for simple integration
▪ Baumer viewer for all camera functions
▪ Gen<I>Cam™ compliant XML file to describe the camera functions
▪ Supplied with installation program with automatic camera recognition for simple commissioning

Compact design
▪ Protection class IP 65/67
▪ Light weight, flexible assembly

Reliable operation
▪ State-of-the-art camera electronics and precision mechanics
▪ Low power consumption and minimal heat generation
5. Camera Models

Dimensions

Figure 1: Dimensions of a Baumer VisiLine IP camera.

Camera Type              Sensor Size   Resolution    Full Frames [max. fps]
CCD Sensor (monochrome / color)
VLG-02M.I / VLG-02C.I    1/4"          656 x 490     160
VLG-12M.I / VLG-12C.I    1/3"          1288 x 960    42
VLG-20M.I / VLG-20C.I    1/1.8"        1624 x 1228   27
CMOS Sensor (monochrome / color)
VLG-22M.I / VLG-22C.I    2/3"          2044 x 1084   55
VLG-40M.I / VLG-40C.I    1"            2044 x 2044   29
6. Installation

Lens mounting

Notice
Avoid contamination of the sensor and the lens by dust and airborne particles when mounting the support or the lens to the device!

Therefore the following points are very important:
▪ Install the camera in an environment that is as dust free as possible!
▪ Keep the dust cover (bag) on the camera as long as possible!
▪ While the sensor is unprotected, hold the camera with the sensor facing downwards.
▪ Avoid contact with any optical surface of the camera!
6.1 Environmental Requirements
Temperature
Storage temperature -10°C ... +70°C ( +14°F ... +158°F)
Operating temperature* see Heat Transmission
* If the environmental temperature exceeds the values listed in the table below, the camera must be cooled (see Heat Transmission).

Figure 2: Temperature measuring point T.
Humidity
Storage and Operating Humidity 10% ... 90%
Non-condensing
6.2 Heat Transmission

Caution
Heat can damage the camera. Provide adequate dissipation of heat to ensure that the temperature does not exceed 50°C (122°F).
As there are numerous possibilities for installation, Baumer does not specify a specific method for proper heat dissipation.

Measuring Point   Maximal Temperature
T                 50°C (122°F)
7. Pin-Assignment
7.1 Power Supply and Digital IOs
Power supply / Digital-IO
(SACC-CI-M12MS-8CON-SH TOR 32)

Pin   Signal       Wire color of the connecting cable
1     OUT 3        white
2     Power VCC+   brown
3     IN 1         green
4     IO GND       yellow
5     U_ext OUT    grey
6     OUT 1        pink
7     Power GND    blue
8     OUT 2        red
Notice
The electrical data are available in the respective data sheet.
7.2 Ethernet Interface (PoE)
Notice
The VisiLine IP supports PoE (Power over Ethernet) IEEE 802.3af Clause 33, 48V Power supply.
Ethernet
(SACC-CI-M12FS-8CON-L180-10G)

Pin   Signal   Wire color
1     MX1+     white
2     MX1-     brown
3     MX2+     green
4     MX2-     yellow
5     MX4+     grey
6     MX4-     pink
7     MX3-     blue
8     MX3+     red
7.2.1 LED Signaling

Figure 3: LED positions on Baumer VisiLine cameras.

LED   Signal        Meaning
1     green         Link active
      green flash   Receiving
2     yellow        Transmitting
8. Product Specifications

8.1 Spectral Sensitivity

The spectral sensitivity characteristics of monochrome and color matrix sensors for VisiLine IP cameras are displayed in the following graphs. The characteristic curves for the sensors do not take the characteristics of lenses and light sources without filters into consideration.

Values relate to the respective technical data sheets of the sensors.

Figure 4: Spectral sensitivities for Baumer cameras with 0.3 MP CCD sensor (VLG-02M.I / VLG-02C.I).

Figure 5: Spectral sensitivities for Baumer cameras with 1.2 MP CCD sensor (VLG-12M.I / VLG-12C.I).

Figure 6: Spectral sensitivities for Baumer cameras with 2.0 MP CCD sensor (VLG-20M.I / VLG-20C.I).

Figure 7: Spectral sensitivities (quantum efficiency) for Baumer cameras with CMOS sensor (VLG-22M.I / VLG-22C.I and VLG-40M.I / VLG-40C.I).
8.2 Field of View Position

Figure 8: Sensor accuracy of the Baumer VisiLine IP (photosensitive surface of the sensor, sensor cover glass thickness D, front cover glass thickness 1 ± 0.1 mm, optical path C-mount 17.526 mm).

The typical accuracy by assumption of the root mean square value is displayed in the figure and the table below:

Camera Type   ± XM [mm]   ± YM [mm]   ± XR [mm]   ± YR [mm]   ± z typ [mm]   ± α typ [°]   A [mm]   D** [mm]
VLG.I-02*     0.09        0.09        0.09        0.09        0.025          0.7           16.1     0.75
VLG.I-12*     0.06        0.06        0.06        0.06        0.025          0.7           16.6     0.5
VLG.I-20*     0.06        0.06        0.06        0.06        0.025          0.7           16.6     0.5
VLG.I-22*     0.07        0.07        0.07        0.07        0.025          0.5           16.2     0.55 ± 0.05
VLG.I-40*     0.07        0.07        0.07        0.07        0.025          0.5           16.2     0.55 ± 0.05

Typical accuracy by assumption of the root mean square value.
* C or M
** Dimension D in this table is from the manufacturer datasheet (edition 06/2012).
8.3 Acquisition Modes and Timings
Timing diagram: Exposure, Readout and Flash of consecutive frames (n) and (n+1), with t_exposure(n), t_readout(n), t_exposure(n+1), t_readout(n+1), t_flash(n), t_flash(n+1) and t_flashdelay.
The image acquisition consists of two separate, successively processed components. Exposing the pixels on the photosensitive surface of the sensor is only the first part of the image acquisition. After completion of the first step, the pixels are read out.

Thereby the exposure time (t_exposure) can be adjusted by the user; however, the time needed for the readout (t_readout) is given by the particular sensor and image format.

Baumer cameras can be operated in three modes: the Free Running Mode, the Fixed-Frame-Rate Mode and the Trigger Mode.

The cameras can be operated non-overlapped*) or overlapped, depending on the mode used and the combination of exposure and readout time:

Non-overlapped Operation: Here the time intervals are long enough to process exposure and readout successively.

Overlapped Operation: In this operation the exposure of a frame (n+1) takes place during the readout of frame (n).
8.3.1 Free Running Mode

In the "Free Running" mode the camera records images permanently and sends them to the PC. In order to achieve an optimal result (with regard to the adjusted exposure time t_exposure and image format), the camera is operated overlapped.

In case of exposure times equal to or less than the readout time (t_exposure ≤ t_readout), the maximum frame rate is provided for the image format used. For longer exposure times the frame rate of the camera is reduced.

t_flash = t_exposure

Timings: A - exposure time frame (n) effective, B - image parameters frame (n) effective, C - exposure time frame (n+1) effective, D - image parameters frame (n+1) effective. Image parameters: Offset, Gain, Mode, Partial Scan.

*) Non-overlapped means the same as sequential.
8.3.2 Fixed-Frame-Rate Mode

With this feature Baumer introduces a clever technique to the VisiLine IP camera series that enables the user to predefine a desired frame rate in continuous mode.

For the employment of this mode the cameras are equipped with an internal clock generator that creates trigger pulses.

Notice
From a certain frame rate, skipping internal triggers is unavoidable. In general, this depends on the combination of adjusted frame rate, exposure and readout times.
8.3.3 Trigger Mode

Timing diagram: Trigger, Exposure, Readout, Flash and TriggerReady for frames (n) and (n+1), with t_triggerdelay, t_min, t_exposure, t_readout, t_flashdelay, t_flash and t_notready.
After a specified external event (trigger) has occurred, image acquisition is started. Depending on the interval of triggers used, the camera operates non-overlapped or overlapped in this mode.

With regard to timings in the trigger mode, the following basic formulas need to be taken into consideration:
Case                     Formula
t_exposure < t_readout   (1) t_earliestpossibletrigger(n+1) = t_exposure(n) + t_readout(n) - t_exposure(n+1)
                         (2) t_notready(n+1) = t_readout(n) - t_exposure(n+1)
t_exposure > t_readout   (3) t_earliestpossibletrigger(n+1) = t_exposure(n)
                         (4) t_notready(n+1) = t_exposure(n)
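The constraint behind these formulas is that the exposure of frame (n+1) may run during the readout of frame (n), but must not end before that readout has finished. As an illustration (not part of the original manual), the following C++ sketch evaluates that constraint; the function and variable names are made up for this example.

#include <algorithm>
#include <cstdio>

// Sketch: earliest allowed spacing between trigger (n) and trigger (n+1)
// in overlapped operation. Exposure (n+1) may start during the readout of
// frame (n), but must not end before that readout ends, and it cannot start
// before exposure (n) has finished. All times in microseconds.
double earliestTriggerSpacing(double exposureN, double readoutN, double exposureN1)
{
    // spacing + exposureN1 >= exposureN + readoutN   (no readout collision)
    // spacing >= exposureN                           (no exposure overlap)
    return std::max(exposureN, exposureN + readoutN - exposureN1);
}

int main()
{
    // Example: exposures shorter than the readout time (case 1 above).
    double spacing = earliestTriggerSpacing(500.0, 10000.0, 500.0);
    std::printf("earliest trigger spacing: %.1f usec\n", spacing);   // 10000.0

    // Example: exposures longer than the readout time (case 2 above).
    spacing = earliestTriggerSpacing(20000.0, 10000.0, 20000.0);
    std::printf("earliest trigger spacing: %.1f usec\n", spacing);   // 20000.0
    return 0;
}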
8.3.3.1 Overlapped Operation: t_exposure(n+2) = t_exposure(n+1)
In overlapped operation, attention should be paid to the time interval in which the camera is unable to process occurring trigger signals (t_notready). This interval is situated between two exposures. When this process time t_notready has elapsed, the camera is able to react to external events again.

After t_notready has elapsed, the timing of (E) depends on the readout time of the current image (t_readout(n)) and the exposure time of the next image (t_exposure(n+1)). It can be determined by the formulas mentioned above (no. 1 or 3, as the case may be).

In case of identical exposure times, t_notready remains the same from acquisition to acquisition.
Timings: A - exposure time frame (n) effective, B - image parameters frame (n) effective, C - exposure time frame (n+1) effective, D - image parameters frame (n+1) effective, E - earliest possible trigger. Image parameters: Offset, Gain, Mode, Partial Scan.
8.3.3.2 Overlapped Operation: t_exposure(n+2) > t_exposure(n+1)

Timing diagram: Trigger, Exposure, Readout, Flash and TriggerReady for frames (n), (n+1) and (n+2), with t_triggerdelay, t_min, t_flashdelay, t_flash and t_notready.

If the exposure time (t_exposure) is increased from the current acquisition to the next acquisition, the time the camera is unable to process occurring trigger signals (t_notready) is scaled down.

This can be simulated with the formulas mentioned above (no. 2 or 4, as the case may be).

Timings: A - exposure time frame (n) effective, B - image parameters frame (n) effective, C - exposure time frame (n+1) effective, D - image parameters frame (n+1) effective, E - earliest possible trigger. Image parameters: Offset, Gain, Mode, Partial Scan.
8.3.3.3 Overlapped Operation: t_exposure(n+2) < t_exposure(n+1)

Timing diagram: Trigger, Exposure, Readout, Flash and TriggerReady for frames (n), (n+1) and (n+2), with t_triggerdelay, t_min, t_flashdelay, t_flash and t_notready.

If the exposure time (t_exposure) is decreased from the current acquisition to the next acquisition, the time the camera is unable to process occurring trigger signals (t_notready) is scaled up.

When decreasing t_exposure such that t_notready exceeds the pause between two incoming trigger signals, the camera is unable to process this trigger and the acquisition of the image will not start (the trigger will be skipped).

Timings: A - exposure time frame (n) effective, B - image parameters frame (n) effective, C - exposure time frame (n+1) effective, D - image parameters frame (n+1) effective, E - earliest possible trigger, F - frame not started / trigger skipped. Image parameters: Offset, Gain, Mode, Partial Scan.

Notice
From a certain frequency of the trigger signal, skipping triggers is unavoidable. In general, this frequency depends on the combination of exposure and readout times.
8.3.3.4 Non-overlapped Operation

Timing diagram: Trigger, Exposure, Readout, Flash and TriggerReady for frames (n) and (n+1), with t_triggerdelay, t_min, t_flashdelay, t_flash and t_notready.

If the trigger period is selected long enough that the image acquisitions (t_exposure + t_readout) run successively, the camera operates non-overlapped.

Timings: A - exposure time frame (n) effective, B - image parameters frame (n) effective, C - exposure time frame (n+1) effective, D - image parameters frame (n+1) effective, E - earliest possible trigger. Image parameters: Offset, Gain, Mode, Partial Scan.
8.3.4 Advanced Timings for GigE Vision® Message Channel

The following charts show some timings for the event signaling by the asynchronous message channel. Vendor-specific events like "TriggerReady", "TriggerSkipped", "TriggerOverlapped" and "ReadoutActive" are explained.

8.3.4.1 TriggerReady

Timing diagram: Trigger, Exposure, Readout and TriggerReady with t_notready.

This event signals whether the camera is able to process incoming trigger signals or not.

8.3.4.2 TriggerSkipped

Timing diagram: Trigger, Exposure, Readout, TriggerReady and TriggerSkipped with t_notready.

If the camera is unable to process incoming trigger signals - which means the camera is triggered within the interval t_notready - these triggers are skipped. On Baumer VisiLine IP cameras the user is informed about this fact by means of the event "TriggerSkipped".
8.3.4.3 TriggerOverlapped

Timing diagram: Trigger, Exposure, Readout and TriggerOverlapped.

This signal is active as long as the sensor is exposed and read out at the same time, which means the camera is operated overlapped.

As soon as a valid trigger signal occurs outside a readout, the "TriggerOverlapped" signal changes to state low.

8.3.4.4 ReadoutActive

Timing diagram: Trigger, Exposure, Readout and ReadoutActive.

While the sensor is read out, the camera signals this by means of "ReadoutActive".
8.4 Software

8.4.1 Baumer GAPI

Baumer GAPI stands for Baumer "Generic Application Programming Interface". With this API Baumer provides an interface for optimal integration and control of Baumer cameras. This software interface allows changing to other camera models.

It provides interfaces to several programming languages, such as C, C++ and the .NET™ Framework on Windows®, as well as Mono on Linux® operating systems, which offers the use of other languages, such as C# or VB.NET.

8.4.2 3rd Party Software

Strict compliance with the Gen<I>Cam™ standard allows Baumer to offer the use of 3rd party software for operation with cameras of the VisiLine IP family.

You can find a current listing of 3rd party software, which was tested successfully in combination with Baumer cameras, at http://www.baumer.com/de-en/produkte/identification-image-processing/software-and-starter-kits/third-party-software/
9. Camera Functionalities

9.1 Image Acquisition

9.1.1 Image Format

A digital camera usually delivers image data in at least one format - the native resolution of the sensor. Baumer cameras are able to provide several image formats (depending on the type of camera).

Compared with standard cameras, the image format on Baumer cameras not only includes the resolution, but a set of predefined parameters.

These parameters are:
▪ Resolution (horizontal and vertical dimensions in pixels)
▪ Binning Mode

The selectable image formats are Full frame, Binning 2x2, Binning 1x2 and Binning 2x1; which of them are available depends on the camera type (VLG-02M.I / VLG-02C.I, VLG-12M.I / VLG-12C.I, VLG-20M.I / VLG-20C.I, VLG-22M.I / VLG-22C.I, VLG-40M.I / VLG-40C.I).
9.1.2 Pixel Format
On Baumer digital cameras the pixel format depends on the selected image format.
Denitions9.1.2.1
RAW: Raw data format. Here the data are stored without processing.

Bayer: Raw data format of color sensors. Color filters are placed on these sensors in a checkerboard pattern, generally in a 50% green, 25% red and 25% blue array.

Figure 9: Sensor with Bayer pattern.

Mono: Monochrome. The color range of mono images consists of shades of a single color. In general, shades of gray or black-and-white are synonyms for monochrome.

RGB: Color model, in which all detectable colors are defined by three coordinates, Red, Green and Blue. The three coordinates are displayed within the buffer in the order R, G, B.

BGR: Here the color alignment mirrors RGB.

YUV: Color model, which is used in the PAL TV standard and in image compression. In YUV, a high bandwidth luminance signal (Y: luma information) is transmitted together with two color difference signals with low bandwidth (U and V: chroma information). Thereby U represents the difference between blue and luminance (U = B - Y), V is the difference between red and luminance (V = R - Y). The third color, green, does not need to be transmitted; its value can be calculated from the other three values.

YUV 4:4:4: Here each of the three components has the same sample rate. Therefore there is no subsampling here.

YUV 4:2:2: The chroma components are sampled at half the sample rate. This reduces the necessary bandwidth to two-thirds (in relation to 4:4:4) and causes no, or low, visual differences.

YUV 4:1:1: Here the chroma components are sampled at a quarter of the sample rate. This decreases the necessary bandwidth by half (in relation to 4:4:4).

Figure 10: RGB color space displayed as color tube.
Pixel depth: In general, pixel depth defines the number of possible different values for each color channel. Mostly this will be 8 bit, which means 2^8 different "colors". For RGB or BGR these 8 bits per channel equal 24 bits overall.

Two bytes are needed for transmitting more than 8 bits per pixel - even if the second byte is not completely filled with data. In order to save bandwidth, the packed formats were introduced to Baumer VisiLine IP cameras. In these formats, the unused bits of one pixel are filled with data from the next pixel.

Figure 11: Bit string of Mono 8 bit and RGB 8 bit.
Figure 12: Spreading of Mono 12 bit over two bytes (the remaining bits are unused).
Figure 13: Spreading of two pixels in Mono 12 bit over three bytes (packed mode).
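As an illustration of how such a packed buffer can be expanded on the PC side, the following C++ sketch unpacks two 12-bit pixels from three bytes. It assumes the GigE Vision Mono12Packed layout (byte 0: upper 8 bits of pixel 0; byte 1: lower 4 bits of pixel 0 and of pixel 1; byte 2: upper 8 bits of pixel 1) - verify the exact bit order against Figure 13 or the GigE Vision specification before relying on it.

#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: unpack a Mono 12 bit "packed" buffer into 16-bit pixel values.
// Assumed layout per 3-byte group (GigE Vision Mono12Packed):
//   byte 0: bits 11..4 of pixel 0
//   byte 1: bits 3..0 of pixel 0 (low nibble), bits 3..0 of pixel 1 (high nibble)
//   byte 2: bits 11..4 of pixel 1
std::vector<uint16_t> unpackMono12Packed(const std::vector<uint8_t>& packed)
{
    std::vector<uint16_t> pixels;
    pixels.reserve(packed.size() / 3 * 2);
    for (std::size_t i = 0; i + 2 < packed.size(); i += 3) {
        uint16_t p0 = static_cast<uint16_t>((packed[i]     << 4) | (packed[i + 1] & 0x0F));
        uint16_t p1 = static_cast<uint16_t>((packed[i + 2] << 4) | (packed[i + 1] >> 4));
        pixels.push_back(p0);
        pixels.push_back(p1);
    }
    return pixels;
}

int main()
{
    // Two example pixel values 0xABC and 0x123 packed into three bytes.
    std::vector<uint8_t> packed = {0xAB, 0x3C, 0x12};
    std::vector<uint16_t> px = unpackMono12Packed(packed);
    return (px.size() == 2 && px[0] == 0xABC && px[1] == 0x123) ? 0 : 1;
}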
9.1.2.2 Pixel Formats on Baumer VisiLine IP Cameras

The following pixel formats are provided, depending on the camera type (monochrome / color): Mono 8, Mono 12, Mono 12 Packed, Bayer RG 8, Bayer RG 10, Bayer RG 12, RGB 8, BGR 8, YUV8_UYV, YUV422_8_UYVY, YUV411_8_UYYVYY.
9.1.3 Exposure Time

On exposure of the sensor, the incident photons produce a charge separation on the semiconductors of the pixels. This results in a voltage difference, which is used for signal extraction.

Figure 14: Incidence of light causes charge separation on the semiconductors of the sensor.

The signal strength is influenced by the incoming amount of photons. It can be increased by increasing the exposure time (t_exposure).

On Baumer VisiLine IP cameras, the exposure time can be set within the following ranges (step size 1 μsec):

Camera Type   t_exposure min   t_exposure max
Monochrome
VLG-02M.I     4 μsec           60 sec
VLG-12M.I     4 μsec           60 sec
VLG-20M.I     4 μsec           60 sec
VLG-22M.I     15 μsec          1 sec
VLG-40M.I     20 μsec          1 sec
Color
VLG-02C.I     4 μsec           60 sec
VLG-12C.I     4 μsec           60 sec
VLG-20C.I     4 μsec           60 sec
VLG-22C.I     15 μsec          1 sec
VLG-40C.I     20 μsec          1 sec
9.1.4 PRNU / DSNU Correction (FPN - Fixed Pattern Noise)

CMOS sensors exhibit nonuniformities that are often called fixed pattern noise (FPN). However, it is not noise but a fixed variation from pixel to pixel that can be corrected. The advantage of using this correction is a more homogeneous picture, which may simplify the image analysis. Variations from pixel to pixel of the dark signal are called dark signal nonuniformity (DSNU), whereas photo response nonuniformity (PRNU) describes variations of the sensitivity. DSNU is corrected via an offset while PRNU is corrected by a factor.

The correction is based on columns. It is important that the correction values are computed for the used sensor readout configuration. During camera production this is derived for the factory defaults. If other settings are used (e.g. a different number of readout channels), using this correction with the default data set may degrade the image quality. In this case the user may derive a specific data set for the used setup.

Example images: PRNU / DSNU Correction Off / PRNU / DSNU Correction On.
9.1.5 HDR (High Dynamic Range)

Beside the standard linear response, the sensor supports a special high dynamic range (HDR) mode called piecewise linear response. In this mode, illuminated pixels that reach a certain programmable voltage level are clipped. Darker pixels that do not reach this threshold remain unchanged. The clipping can be adjusted two times within a single exposure by configuring the respective time slices and clipping voltage levels. See the figure below for details.

Figure: Piecewise linear response - sensor output for low and high illumination, with clipping levels Pot0, Pot1, Pot2 and time slices t_Expo0, t_Expo1, t_Expo2 within t_exposure.

In this mode, the values for t_Expo0, t_Expo1, Pot0 and Pot1 can be edited. The value for t_Expo2 will be calculated automatically in the camera:

t_Expo2 = t_exposure - t_Expo0 - t_Expo1

Example images: HDR Off / HDR On.
9.1.6 Look-Up-Table

The Look-Up-Table (LUT) is employed on Baumer VisiLine IP monochrome and color cameras. It contains 2^12 (4096) values for the available levels. These values can be adjusted by the user.

9.1.7 Gamma Correction

With this feature, Baumer VisiLine IP cameras offer the possibility of compensating the non-linearity in the perception of light by the human eye.

Figure 15: Non-linear perception of the human eye. H - perception of brightness, E - energy of light.

For this correction, the corrected pixel intensity (Y') is calculated from the original intensity of the sensor's pixel (Y_original) and the correction factor γ using the following formula (in an oversimplified version):

Y' = Y_original ^ γ

On Baumer VisiLine IP cameras the correction factor γ is adjustable from 0.001 to 2.

The values of the calculated intensities are entered into the Look-Up-Table (see 9.1.6). Thereby previously existing values within the LUT will be overwritten.

Notice
If the LUT feature is disabled on the software side, the gamma correction feature is disabled, too.
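Since the corrected intensities are written into the 12-bit LUT, gamma correction amounts to filling the 4096 LUT entries with a power function. The following host-side C++ sketch illustrates this; it is not the in-camera implementation, and the function name is made up for the example.

#include <array>
#include <cmath>
#include <cstdint>

// Sketch: fill a 12-bit Look-Up-Table with a gamma curve Y' = Y_original^gamma.
// Input and output values are normalized to the 12-bit range 0..4095.
std::array<uint16_t, 4096> makeGammaLut(double gamma)   // gamma in 0.001 .. 2
{
    std::array<uint16_t, 4096> lut{};
    for (int y = 0; y < 4096; ++y) {
        double normalized = static_cast<double>(y) / 4095.0;
        double corrected  = std::pow(normalized, gamma);
        lut[y] = static_cast<uint16_t>(corrected * 4095.0 + 0.5);
    }
    return lut;
}

int main()
{
    auto lut = makeGammaLut(0.45);       // example: gamma < 1 brightens mid-tones
    return lut[2048] > 2048 ? 0 : 1;
}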
9.1.8 Region of Interest

With the "Region of Interest" (ROI) function it is possible to predefine a so-called Region of Interest (ROI) or Partial Scan. This ROI is an area of pixels of the sensor. On image acquisition, only the information of these pixels is sent to the PC. Therefore, not all lines of the sensor are read out, which decreases the readout time (t_readout). This increases the frame rate.

This function is employed when only a region of the field of view is of interest. It is coupled to a reduction in resolution.

The ROI is specified by four values:
▪ Offset X - x-coordinate of the first relevant pixel
▪ Offset Y - y-coordinate of the first relevant pixel
▪ Size X - horizontal size of the ROI
▪ Size Y - vertical size of the ROI

Figure 16: ROI parameters.

9.1.8.1 ROI Readout

In the illustration of Figure 17, the readout time would be decreased to 40%, compared to a full frame readout.

Figure 17: Decrease in readout time by using partial scan (Start ROI, End ROI, readout lines).
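Because only the selected lines are read out, the readout time - and with it the achievable frame rate - shrinks roughly in proportion to the ROI height. The following C++ sketch makes a rough estimate under the assumption that the readout time scales linearly with the number of lines and that fixed per-frame overhead can be neglected; the numbers used are hypothetical.

#include <cstdio>

// Rough sketch: estimate the ROI readout time, assuming the readout time
// scales with the number of lines read (per-frame overhead is neglected).
double estimateRoiReadoutTime(double fullFrameReadout, int fullHeight, int roiHeight)
{
    return fullFrameReadout * static_cast<double>(roiHeight) / fullHeight;
}

int main()
{
    // Example matching the illustration: reading 40% of the lines cuts the
    // readout time to roughly 40% of the full frame value (values in msec).
    double t = estimateRoiReadoutTime(23.8, 960, 384);
    std::printf("estimated ROI readout time: %.1f msec\n", t);
    return 0;
}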
9.1.9 Binning

On digital cameras, you can find several operations for increasing sensitivity. One of them is the so-called "binning". Here, the charge carriers of neighboring pixels are aggregated, so the sensitivity is increased by the amount of binned pixels. By using this operation, the gain in sensitivity is coupled to a reduction in resolution.

Baumer cameras support three types of binning - vertical, horizontal and bidirectional.

In unidirectional binning, vertically or horizontally neighboring pixels are aggregated and reported to the software as one single "superpixel".

In bidirectional binning, a square of neighboring pixels is aggregated.

Binning   Description
without   Full frame image, no binning of pixels (Figure 18).
1x2       Vertical binning causes a vertically compressed image with doubled brightness (Figure 19).
2x1       Horizontal binning causes a horizontally compressed image with doubled brightness (Figure 20).
2x2       Bidirectional binning causes both a horizontally and vertically compressed image with quadruple brightness (Figure 21).
9.1.10 Brightness Correction (Binning Correction)

The aggregation of charge carriers may cause an overload. To prevent this, binning correction was introduced. Here, three binning modes need to be considered separately:

Binning   Realization
1x2       1x2 binning is performed within the sensor; binning correction also takes place there. A possible overload is prevented by halving the exposure time.
2x1       2x1 binning takes place within the FPGA of the camera. The binning correction is realized by aggregating the charge quantities and then halving this sum.
2x2       2x2 binning is a combination of the above versions.

Figure 22: Aggregation of charge carriers from four pixels in bidirectional (2x2) binning - the superpixel carries the total charge quantity of the 4 aggregated pixels.
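As an illustration of the FPGA-side part of this correction, the following C++ sketch performs 2x1 binning with brightness correction on one image line: the two neighboring pixel values are aggregated and the sum is halved. This is a simplified model, not the camera's actual implementation.

#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of 2x1 (horizontal) binning with brightness correction: neighboring
// pixel values are aggregated and the sum is halved, so the superpixel cannot
// overflow the original value range.
std::vector<uint16_t> bin2x1WithCorrection(const std::vector<uint16_t>& line)
{
    std::vector<uint16_t> binned;
    binned.reserve(line.size() / 2);
    for (std::size_t x = 0; x + 1 < line.size(); x += 2) {
        uint32_t sum = static_cast<uint32_t>(line[x]) + line[x + 1];
        binned.push_back(static_cast<uint16_t>(sum / 2));
    }
    return binned;
}

int main()
{
    std::vector<uint16_t> line = {4095, 4095, 1000, 2000};
    auto out = bin2x1WithCorrection(line);   // {4095, 1500}
    return (out.size() == 2 && out[0] == 4095 && out[1] == 1500) ? 0 : 1;
}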
9.1.11 Flip Image

The Flip Image function lets you flip the captured images horizontally and/or vertically before they are transmitted from the camera. The availability of horizontal and vertical flipping depends on the camera type.

Notice
A defined ROI will also be flipped.

Figure 23: Flip image vertical (normal / flip vertical).
Figure 24: Flip image horizontal (normal / flip horizontal).
Figure 25: Flip image horizontal and vertical (normal / flip horizontal and vertical).
9.2 Color Processing

Baumer color cameras are balanced to a color temperature of 5000 K.

Oversimplified, color processing is realized by four modules.

Figure 26: Color processing modules of Baumer color cameras (camera module, Bayer processor, color transformation; color signals r, g, b → r', g', b' → r'', g'', b'', luminance Y, white balance).

The color signals r (red), g (green) and b (blue) of the sensor are amplified in total and digitized within the camera module.

Within the Bayer processor, the raw signals r', g' and b' are amplified by using independent factors for each color channel. Then the missing color values are interpolated, which results in new color values (r'', g'', b''). The luminance signal Y is also generated.

The next step is the color transformation. Here the previously generated color signals r'', g'' and b'' are converted to the chroma signals U and V, which conform to the standard. Afterwards these signals are transformed into the desired output format. Thereby the following steps are processed simultaneously:
▪ Transformation to color space RGB or YUV
▪ External color adjustment
▪ Color adjustment as physical balance of the spectral sensitivities

In order to reduce the data rate of YUV signals, a subsampling of the chroma signals can be carried out. Here the following items can be customized to the desired output format:
▪ Order of data output
▪ Subsampling of the chroma components to YUV 4:2:2 or YUV 4:1:1
▪ Limitation of the data rate to 8 bits

9.3 Color Adjustment – White Balance

This feature is available on all color cameras of the Baumer VisiLine IP series and takes place within the Bayer processor.

White balance means independent adjustment of the three color channels red, green and blue by employing a correction factor for each channel.

9.3.1 User-specific Color Adjustment

The user-specific color adjustment in Baumer color cameras facilitates adjustment of the correction factors for each color gain. This way, the user is able to adjust the amplification of each color channel exactly to his needs. The correction factors for the color gains range from 1 to 4.

Figure 27: Examples of histograms for a non-adjusted image and for an image after user-specific white balance.
9.3.2 One Push White Balance

Here, the three color spectra are balanced to a single white point. The correction factors of the color gains are determined by the camera (one time).

Figure 28: Examples of histograms for a non-adjusted image and for an image after "one push" white balance.

9.4 Analog Controls

9.4.1 Offset / Black Level

On Baumer VisiLine IP cameras, the offset (or black level) is adjustable from 0 to 255 LSB (relating to 12 bit).

Camera Type             Step Size   Relating to
VLG-02M.I / VLG-02C.I   1 LSB       12 bit
VLG-12M.I / VLG-12C.I   1 LSB       12 bit
VLG-20M.I / VLG-20C.I   1 LSB       12 bit
VLG-22M.I / VLG-22C.I   1 LSB       12 bit
VLG-40M.I / VLG-40C.I   1 LSB       12 bit
9.4.2 Gain

In industrial environments motion blur is unacceptable. Due to this fact exposure times are limited. However, this causes low output signals from the camera and results in dark images. To solve this issue, the signals can be amplified by a user-defined gain factor within the camera. This gain factor is adjustable.

Notice
Increasing the gain factor causes an increase of image noise.

CCD Sensor
Camera Type             Gain factor [dB]
VLG-02M.I / VLG-02C.I   0...26
VLG-12M.I / VLG-12C.I   0...26
VLG-20M.I / VLG-20C.I   0...26

CMOS Sensor
Camera Type             Gain factor [dB]
VLG-22M.I / VLG-22C.I   0...18
VLG-40M.I / VLG-40C.I   0...18
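The gain is specified in dB; the corresponding linear amplification factor follows from the usual conversion factor = 10^(dB/20), so about 6 dB doubles the signal. This relation is general signal-processing knowledge rather than a statement from this manual; the following C++ sketch shows the conversion.

#include <cmath>
#include <cstdio>

// Convert a gain value given in dB into a linear amplification factor.
double gainFactorFromDb(double gainDb)
{
    return std::pow(10.0, gainDb / 20.0);
}

int main()
{
    std::printf("%.2f\n", gainFactorFromDb(6.0));    // ~2.00
    std::printf("%.2f\n", gainFactorFromDb(18.0));   // ~7.94  (CMOS maximum)
    std::printf("%.2f\n", gainFactorFromDb(26.0));   // ~19.95 (CCD maximum)
    return 0;
}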
9.5 Pixel Correction

9.5.1 General Information

A certain probability for abnormal pixels - the so-called defect pixels - applies to the sensors of all manufacturers. The charge quantity on these pixels is not linearly dependent on the exposure time.

The occurrence of these defect pixels is unavoidable and intrinsic to the manufacturing and aging process of the sensors.

The operation of the camera is not affected by these pixels. They only appear as brighter (warm pixel) or darker (cold pixel) spots in the recorded image.

Figure 29: Distinction of "hot" and "cold" pixels within the recorded image.

Figure 30: Charge quantity of "hot" and "cold" pixels compared with "normal" pixels.
9.5.2 Correction Algorithm

On cameras of the Baumer VisiLine IP series, the problem of defect pixels is solved as follows:
▪ Possible defect pixels are identified during the production process of the camera. The coordinates of these pixels are stored in the factory settings of the camera.
▪ Once the sensor readout is completed, correction takes place: before any other processing, the values of the neighboring pixels on the left and the right side of the defect pixel are read out (within the same Bayer phase for color).
▪ Then the average value of these 2 pixels is determined to correct the first defect pixel.
▪ Finally, the value of the second defect pixel is corrected by using the previously corrected pixel and the pixel on the other side of the defect pixel.
▪ The correction is able to correct up to two neighboring defect pixels.
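The following C++ sketch illustrates this interpolation for a single monochrome line; the camera's internal implementation additionally respects the Bayer phase for color sensors, and the names used here are made up for the example.

#include <cstddef>
#include <cstdint>
#include <set>
#include <vector>

// Sketch: correct isolated defect pixels in one monochrome line by replacing
// them with the average of the left and right neighbors. For color sensors
// the neighbors would be taken from the same Bayer phase (two pixels apart).
void correctDefectPixels(std::vector<uint16_t>& line, const std::set<std::size_t>& defects)
{
    for (std::size_t x : defects) {
        if (x == 0 || x + 1 >= line.size())
            continue;                         // border pixels are skipped in this sketch
        // Since the set is processed in ascending order, the left neighbor may
        // itself already be a corrected defect pixel, which allows up to two
        // neighboring defects to be handled.
        line[x] = static_cast<uint16_t>((line[x - 1] + line[x + 1]) / 2);
    }
}

int main()
{
    std::vector<uint16_t> line = {100, 4000, 104, 106};
    correctDefectPixels(line, {1});           // pixel 1 is a stored defect pixel
    return line[1] == 102 ? 0 : 1;
}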
9.5.3 Defectpixellist

As stated previously, this list is determined within the production process of Baumer cameras and stored in the factory settings.

Additional hot or cold pixels can develop during the lifecycle of a camera. In this case Baumer offers the possibility of adding their coordinates to the defectpixellist.

The user can determine the coordinates*) of the affected pixels and add them to the list. Once the defect pixel list is stored in a user set, pixel correction is executed for all coordinates on the defectpixellist.

*) Position in relation to Full Frame Format (Raw Data Format / no flipping).
9.6 Process Interface

9.6.1 Digital IOs

9.6.1.1 User Definable Inputs

The wiring of these input connectors is left to the user.

The sole exception is the compliance with the predetermined high and low levels (0 .. 4.5 V low, 11 .. 30 V high).

The defined signals will have no direct effect, but can be analyzed and processed on the software side and used for controlling the camera.

The employment of a so-called "IO matrix" offers the possibility of selecting the signal and the state to be processed.

On the software side the input signals are named "Trigger", "Timer" and "LineOut 1..3".

Figure 31: IO matrix of the Baumer VisiLine on the input side (input Line 1; state selection (inverter) and signal selection (software side) for Trigger, Timer and LineOut 1..3).
9.6.1.2 Configurable Outputs

With this feature, Baumer offers the possibility of wiring the output connectors to internal signals, which are controlled on the software side.

On VisiLine IP cameras, each output connector can be wired to one of the provided internal signals: "Off", "ExposureActive", "Line 0", "Timer 1 … 3", "ReadoutActive", "User0 … 2", "TriggerReady", "TriggerOverlapped", "TriggerSkipped", "Sequencer Output 0 ... 2". Besides this, the output can be disabled.

Figure 32: IO matrix of the Baumer VisiLine IP on the output side (output Lines 1-3 with state selection (inverter) and signal selection (software side); internal signals: TriggerReady, TriggerOverlapped, TriggerSkipped, ExposureActive, ReadoutActive, Timer1Active..Timer3Active; user defined signals: UserOutput0..2, SequencerOutput0..2; loop-throughed signal: Line0).

9.6.2 IO Circuits

Circuit diagrams: output high active, output low active, input (IO Power VCC, U_ext, R_L, I_OUT, IO GND, DRV, IN1, IN GND).

Notice
Low active: with this wiring, only one consumer can be connected. When all output pins (1, 2, 3) are connected to IO_GND, current flows through the resistor as soon as one output is switched. If only one output is connected to IO_GND, only this output is usable. The other two outputs are not usable and may not be connected (e.g. to IO Power VCC)!
9.6.3 Trigger

Figure 33: Trigger signal, valid for Baumer cameras (low: 0 .. 4.5 V, high: 11 .. 30 V).

Figure 34: Camera in trigger mode: A - trigger delay, B - exposure time, C - readout time.

Trigger signals are used to synchronize the camera exposure and a machine cycle or, in the case of a software trigger, to take images at predefined time intervals.

Different trigger sources can be used here.

Trigger Delay:
The trigger delay is a flexible, user-defined delay between the given trigger impulse and the image capture. The delay time can be set between 0.0 μsec and 2.0 sec with a step size of 1 μsec. In the case of multiple triggers arriving during the delay, these triggers are stored and delayed, too. The buffer is able to store up to 512 trigger signals during the delay.

Your benefits:
▪ No need for a perfect alignment of an external trigger sensor.
▪ Different objects can be captured without hardware changes.

9.6.4 Trigger Source

Figure 35: Examples of possible trigger sources (hardware trigger: photoelectric sensor, trigger signal of a programmable logic controller, others; software trigger; broadcast).

Each trigger source has to be activated separately. When the trigger mode is activated, the hardware trigger is activated by default.
9.6.5 Debouncer

The basic idea behind this feature was to separate interfering signals (short peaks) from valid square wave signals, which can be important in industrial environments. Debouncing means that invalid signals are filtered out, and signals lasting longer than a user-defined testing time t_DebounceHigh are recognized and routed to the camera to induce a trigger.

In order to detect the end of a valid signal and filter out possible jitters within the signal, a second testing time t_DebounceLow was introduced. This timing is also adjustable by the user. If the signal value falls to state low and does not rise again within t_DebounceLow, this is recognized as the end of the signal.

Debouncer diagram: incoming signals (valid and invalid) and the filtered signal. Δt_x - high time of the signal; t_DebounceHigh - user-defined debouncer delay for state high; t_DebounceLow - user-defined debouncer delay for state low. Signal levels: low 0 .. 4.5 V, high 11 .. 30 V.

Debouncer:
The debouncing times t_DebounceHigh and t_DebounceLow are adjustable from 0 to 5 msec in steps of 1 μsec.

Please note that the edges of valid trigger signals are shifted by t_DebounceHigh and t_DebounceLow! Depending on these two timings, the trigger signal might be temporally stretched or compressed.
9.6.6 Flash Signal
This signal is managed by the exposure of the sensor.

Furthermore, the falling edge of the flash output signal can be used to trigger a movement of the inspected objects. Due to this fact, the span of time used for the sensor readout (t_readout) can be used optimally in industrial environments.
9.6.7 Timers

Timers were introduced for advanced control of internal camera signals.

For example, the employment of a timer allows you to control the flash signal in such a way that the illumination does not start synchronized to the sensor exposure but a predefined interval earlier.

Figure 37: Possible timer configuration on a Baumer VisiLine (Trigger, Exposure and Timer with t_triggerdelay, t_exposure, t_TimerDelay and t_TimerDuration).

On Baumer VisiLine IP cameras the timer configuration includes four components:

Component                Description
TimerTriggerSource       This feature provides a source selection for each timer.
TimerTriggerActivation   This feature selects the part of the trigger signal (edges or states) that activates the timer.
TimerDelay               This feature represents the interval between the incoming trigger signal and the start of the timer.
TimerDuration            By this feature the activation time of the timer is adjustable.

9.6.7.1 Flash Delay

As previously stated, the Timer feature can be used to start the connected illumination earlier than the sensor exposure.

This implies a timer configuration as follows:
▪ The flash output needs to be wired to the selected internal Timer signal.
▪ Trigger source and trigger activation for the Timer need to be the same as for the sensor exposure.
▪ The TimerDelay feature (t_TimerDelay) needs to be set to a lower value than the trigger delay (t_triggerdelay).
▪ The duration (t_TimerDuration) of the timer signal should last until the exposure of the sensor is completed. This can be realized by using the following formula:

t_TimerDuration = (t_triggerdelay - t_TimerDelay) + t_exposure
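Expressed in code, the required timer duration follows directly from this formula. The following C++ sketch uses made-up example values (all times in microseconds):

#include <cstdio>

// Sketch: timer duration for a flash that starts before the sensor exposure,
// following t_TimerDuration = (t_triggerdelay - t_TimerDelay) + t_exposure.
// All times in microseconds; the values below are made-up examples.
double timerDuration(double triggerDelay, double timerDelay, double exposure)
{
    return (triggerDelay - timerDelay) + exposure;
}

int main()
{
    double triggerDelay = 1000.0;   // exposure starts 1000 usec after the trigger
    double timerDelay   = 400.0;    // flash (timer) starts 400 usec after the trigger
    double exposure     = 2000.0;

    // The flash then burns from 400 usec to 3000 usec after the trigger,
    // covering the whole exposure (1000 usec to 3000 usec).
    std::printf("t_TimerDuration = %.0f usec\n",
                timerDuration(triggerDelay, timerDelay, exposure));   // 2600
    return 0;
}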
9.6.8 Frame Counter
The frame counter is part of the Baumer Image Info Header and supplied with every image, if the chunk mode is activated. It is generated by hardware and can be used to verify that every image of the camera is transmitted to the PC and received in the right order.
9.7 Sequencer

9.7.1 General Information

A sequencer is used for the automated control of series of images using different sets of parameters.

Figure 38: Flow chart of the sequencer. m - number of loop passes, n - number of set repetitions, o - number of sets of parameters, z - number of frames per trigger.

The figure above displays the fundamental structure of the sequencer module.

A sequence (o) is defined as a complete pass through all sets of parameters.

The loop counter (m) represents the number of sequence repetitions.

The repeat counter (n) is used to control the amount of images taken with the respective sets of parameters.

The start of the sequencer can be realized directly (free running) or via an external event (trigger).

The additional frame counter (z) is used to create a half-automated sequencer. It is absolutely independent from the other three counters, and used to determine the number of frames per external trigger event.

The following timeline displays the temporal course of a sequence with:
▪ n = 5 repetitions per set of parameters
▪ o = 3 sets of parameters (A, B and C)
▪ m = 1 sequence and
▪ z = 2 frames per trigger

Figure 39: Timeline for a single sequence.

Sequencer Parameters:
The mentioned sets of parameters include the following:
▪ Exposure time
▪ Gain factor
▪ Output line
▪ Origin of ROI (Offset X, Y)
9.7.2 Baumer Optronic Sequencer in Camera xml-file

The Baumer Optronic sequencer is described in the category "BOSequencer" by the following features:

<Category Name="BOSequencer" NameSpace="Custom">
  <pFeature>BoSequencerEnable</pFeature>             Enable / Disable
  <pFeature>BoSequencerStart</pFeature>              Start / Stop
  <pFeature>BoSequencerRunOnce</pFeature>            Run Once / Cycle
  <pFeature>BoSequencerFreeRun</pFeature>            Free Running / Trigger
  <pFeature>BoSequencerSetSelector</pFeature>        Configure set of parameters
  <pFeature>BoSequencerLoops</pFeature>              Number of sequences (m)
  <pFeature>BoSequencerSetRepeats</pFeature>         Number of repetitions (n)
  <pFeature>BoSequencerFramesPerTrigger</pFeature>   Number of frames per trigger (z)
  <pFeature>BoSequencerExposure</pFeature>           Parameter exposure
  <pFeature>BoSequencerGain</pFeature>               Parameter gain
</Category>
9.7.3 Examples

9.7.3.1 Sequencer without Machine Cycle

Figure 40: Example for a fully automated sequencer.

The figure above shows an example for a fully automated sequencer with three sets of parameters (A, B and C). Here the repeat counter (n) is set to 5, and the loop counter (m) has a value of 2.

When the sequencer is started, with or without an external event, the camera will record 5 images successively in each case, using the sets of parameters A, B and C (which constitutes a sequence). After that, the sequence is started once again, followed by a stop of the sequencer - in this case the parameters are maintained.

9.7.3.2 Sequencer Controlled by Machine Steps (trigger)

Figure 41: Example for a half-automated sequencer.

The figure above shows an example for a half-automated sequencer with the three sets of parameters (A, B and C) from the previous example. The frame counter (z) is set to 2. This means the camera records two pictures after an incoming trigger signal.

9.7.4 Capability Characteristics of Baumer-GAPI Sequencer Module

▪ up to 128 sets of parameters
▪ up to 65536 loop passes
▪ up to 65536 repetitions of sets of parameters
▪ up to 65536 images per trigger event
▪ free running mode without initial trigger
9.7.5 Double Shutter

This feature offers the possibility of capturing two images in a very short interval. Depending on the application, this is performed in conjunction with a flash unit. Thereby the first exposure time (t_exposure) is arbitrary and accompanied by the first flash. The second exposure time must be equal to, or longer than, the readout time (t_readout) of the sensor. Thus the pixels of the sensor are receptive again shortly after the first exposure. In order to realize the second short exposure time without an overrun of the sensor, a second short flash must be employed, and any subsequent extraneous light prevented.

Figure 42: Example of a double shutter (Trigger, Exposure, Readout, Flash and prevention of light).

On Baumer VisiLine IP cameras this feature is realized within the sequencer.

In order to generate this sequence, the sequencer must be configured as follows:

Parameter                Setting
Sequencer Run Mode       Once by Trigger
Sets of parameters (o)   2
Loops (m)                1
Repeats (n)              1
Frames Per Trigger (z)   2

9.8 Device Reset

The feature Device Reset corresponds to switching the camera off and on. This is necessary after a parameterization (e.g. of the network data) of the camera.

Interrupting the power supply is therefore no longer necessary.
49
9.9 User Sets
Four user sets (0-3) are available for the Baumer cameras of the VisiLine IP series. User set 0 is the default set and contains the factory settings. User sets 1 to 3 are user-specific and can contain any user-definable parameters.
These user sets are stored within the camera and can be loaded, saved and transferred to other cameras of the VisiLine IP series.
By employing the so-called "user set default selector", one of the four possible user sets can be selected as the default, which means the camera starts up with these adjusted parameters.
9.10 Factory Settings
The factory settings are stored in "user set 0", which is the default user set. This is the only user set that is not editable.
9.11 Timestamp
The timestamp is part of the GigE Vision® standard. It is 64 bits long and denoted in Ticks*). Any image or event includes its corresponding timestamp.
At power on or reset, the timestamp starts running from zero.
◄Figure43
Timestamps of recorded
images.
*) A tick is the internal time unit of the camera; it lasts 1 nsec.
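As a short worked example: since one tick lasts 1 nsec, an image whose timestamp reads 2,500,000,000 ticks was captured 2.5 seconds after power-on or reset (assuming the timestamp has not been reset in between).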
10. Interface Functionalities
10.1 Device Information
This Gigabit Ethernet-specific information on the device is part of the discovery acknowledge of the camera.
Included information:
▪ MAC address
▪ Current IP configuration (persistent IP / DHCP / LLA)
▪ Current IP parameters (IP address, subnet mask, gateway)
▪ Manufacturer's name
▪ Manufacturer-specific information
▪ Device version
▪ Serial number
▪ User-defined name (user programmable string)
10.2 Baumer Image Info Header
The Baumer Image Info Header is a data packet, which is generated by the camera and integrated in the last data packet of every image, if chunk mode is activated.
Figure44►
Location of the Baumer Image Info Header
This integrated data packet contains different settings for the respective image. Baumer GAPI (BGAPI) can read the Image Info Header. Third-party software that supports chunk mode can read the features listed in the table below. These settings include (not exhaustively):
Feature | Description
ChunkOffsetX | Horizontal offset from the origin to the area of interest (in pixels).
ChunkOffsetY | Vertical offset from the origin to the area of interest (in pixels).
ChunkWidth | Returns the width of the image included in the payload.
ChunkHeight | Returns the height of the image included in the payload.
ChunkPixelFormat | Returns the pixel format of the image included in the payload.
ChunkExposureTime | Returns the exposure time used to capture the image.
ChunkBlackLevelSelector | Selects which black level to retrieve data from.
ChunkBlackLevel | Returns the black level used to capture the image included in the payload.
ChunkFrameID | Returns the unique identifier of the frame (or image) included in the payload.
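If chunk mode is activated, the values above travel with each image. The sketch below only illustrates how an application might print a few of them once its SDK has parsed the chunk data; the ChunkData class and its getInt method are hypothetical placeholders for that parsing step, and the real data types (e.g. of ChunkExposureTime) may differ.

#include <cstdint>
#include <iostream>
#include <string>

// Hypothetical accessor for already-parsed chunk data of one received image.
struct ChunkData {
    std::int64_t getInt(const std::string& /*featureName*/) const { return 0; /* filled by the SDK */ }
};

// Print a subset of the Baumer Image Info Header of the current image.
void printImageInfo(const ChunkData& chunk)
{
    std::cout << "Frame ID : " << chunk.getInt("ChunkFrameID") << "\n"
              << "Offset   : " << chunk.getInt("ChunkOffsetX") << ", " << chunk.getInt("ChunkOffsetY") << "\n"
              << "Size     : " << chunk.getInt("ChunkWidth") << " x " << chunk.getInt("ChunkHeight") << "\n"
              << "Exposure : " << chunk.getInt("ChunkExposureTime") << "\n";
}

int main() { ChunkData chunk; printImageInfo(chunk); }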
10.3 Packet Size and Maximum Transmission Unit (MTU)
Network packets can be of different sizes. The size depends on the network components employed. When using GigE Vision®-compliant devices, it is generally recommended to use larger packets. On the one hand the overhead per packet is smaller, on the other hand larger packets cause less CPU load.
The packet size of UDP packets can range from 576 Bytes up to the MTU.
The MTU describes the maximum packet size which can be handled by all network components involved.
In principle modern network hardware supports a packet size of 1500 Bytes, which is specified in the GigE network standard. "Jumbo frames" merely characterizes a packet size exceeding 1500 Bytes.
Baumer VisiLine IP cameras can handle a MTU of up to 65535 Bytes.
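As a rough illustration (ignoring protocol header overhead): a full-resolution 8-bit image of the VLG-12M.I (1288 x 960 pixels, see section 10.5.2) amounts to 1236480 Bytes, i.e. roughly 825 packets at the standard MTU of 1500 Bytes, but only about 19 packets at the maximum MTU of 65535 Bytes.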
10.4 Inter Packet Gap
To achieve optimal results in image transfer, several Ethernet-specific factors need to be considered when using Baumer VisiLine IP cameras.
Upon starting the image transfer of a camera, the data packets are transferred at maximum transfer speed (1 Gbit/sec). In accordance with the network standard, Baumer employs a minimal separation of 12 Bytes between two packets. This separation is called "inter packet gap" (IPG). In addition to the minimal IPG, the GigE Vision® standard stipulates that the IPG be scalable (user-defined).

IPG:
The IPG is measured in ticks. An easy rule of thumb is: 1 tick is equivalent to 4 bits of data. You should also not forget to add the various Ethernet headers to your calculation.
10.4.1 Example 1: Multi Camera Operation – Minimal IPG
Setting the IPG to minimum means every image is transferred at maximum speed. Even at a frame rate of 1 fps this results in full load on the network. Such "bursts" can lead to an overload of several network components and a loss of packets. This can occur especially when using several cameras.
▲Figure45
Operation of two camer­as employing a Gigabit Ethernet switch. Data processing within the switch is displayed
in the next two gures.
Figure46►
Operation of two cameras em­ploying aminimal inter packet gap (IPG).
In the case of two cameras sending images at the same time, this would theoretically occur at a transfer rate of 2 Gbit/sec. The switch has to buffer this data and transfer it at a speed of 1 Gbit/sec afterwards. Depending on the internal buffer of the switch, this operates without any problems for up to n cameras (n ≥ 1). More cameras would lead to a loss of packets. These lost packets can, however, be recovered by employing an appropriate resend mechanism, but this leads to additional load on the network components.
Max. IPG:
On Gigabit Ethernet, the maximum IPG plus the data packet must not exceed the capacity of the 1 Gbit/sec link. Otherwise data packets can be lost.

10.4.2 Example 2: Multi Camera Operation – Optimal IPG

Figure 50: Operation of two cameras employing an optimal inter packet gap (IPG).

A better method is to increase the IPG to a size of
optimal IPG = (number of cameras - 1) × packet size + 2 × minimal IPG
In this way both data packets can be transferred successively (zipper principle), and the switch does not need to buffer the packets.
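A worked example of this rule of thumb for two cameras, assuming a packet size of 1500 Bytes and the minimal IPG of 12 Bytes: optimal IPG = (2 - 1) × 1500 Bytes + 2 × 12 Bytes = 1524 Bytes. The Ethernet headers mentioned in the IPG note above still have to be added on top.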
10.5 Transmission Delay
Another approach to packet sorting in multi-camera operation is the so-called transmission delay.
Since the currently recorded image is stored within the camera and its transmission starts with a predefined delay, complete images can be transmitted to the PC at once.
The following figure serves as an example:
For the image processing three cameras with different sensor resolutions are employed – for example camera 1: VLG-12M.I, camera 2: VLG-20M.I, camera 3: VLG-02M.I.
Due to process-related circumstances, the image acquisitions of all cameras end at the same time. Now the cameras do not try to transmit their images simultaneously, but – according to the specified transmission delays – one after another. Thereby the first camera starts the transmission immediately, with a transmission delay of "0".
10.5.1 Time Saving in Multi-Camera Operation
As previously stated, the transmission delay feature was especially designed for multi-camera operation employing different camera models. Here a significant acceleration of the image transmission can be achieved:
◄Figure47
Principle of the trans-
mission delay.
◄Figure48
Comparison of trans-
mission delay and inter
packet gap, employed
for a multi-camera sys-
tem with different cam-
era models.
For the above-mentioned example, the employment of the transmission delay feature results in a time saving of approx. 45% compared to the approach of using the inter packet gap (applied to the transmission of all three images).
10.5.2 Configuration Example

Figure 48: Timing diagram for the transmission delay of the three employed cameras, using equal exposure times.
Timings:
A - exposure start for all cameras
B - all cameras ready for transmission
C - transmission start camera 2
D - transmission start camera 3

For the three employed cameras the following data are known:
Camera Model | Sensor Resolution [Pixel] | Pixel Format (Pixel Depth) [bit] | Resulting Data Volume [bit] | Readout Time [msec] | Exposure Time [msec] | Transfer Time (GigE) [msec]
VLG-12M.I | 1288 x 960 | 8 | 9891840 | 23.8 | 32 | ≈ 9.2
VLG-20M.I | 1624 x 1228 | 8 | 15954176 | 37 | 32 | ≈ 14.9
VLG-02M.I | 656 x 490 | 8 | 2571520 | 6.4 | 32 | ≈ 2.4
The sensor resolution and the readout time (t_readout) can be found in the respective Technical Data Sheet (TDS). For the example the full frame resolution is used. The exposure time (t_exposure) is manually set to 32 msec.
The resulting data volume is calculated as follows:
Resulting Data Volume = horizontal Pixels × vertical Pixels × Pixel Depth
The transfer time (t_transferGigE) at the full GigE transfer rate is calculated as follows:
Transfer Time (GigE) = Resulting Data Volume / 1024³ × 1000 [msec]
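Applied to the VLG-12M.I row of the table above: Resulting Data Volume = 1288 × 960 × 8 bit = 9891840 bit, and Transfer Time (GigE) = 9891840 / 1024³ × 1000 msec ≈ 9.2 msec, which matches the listed value.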
All the cameras are triggered simultaneously.
The transmission delay is realized as a counter that is started immediately after the sensor readout is started.
* Due to technical issues, the data transfer of camera 1 does not take place at full GigE speed.
In general, the transmission delay for camera n (n ≥ 2) is calculated as:

t_TransmissionDelay(Camera n) = t_exposure(Camera 1) + t_readout(Camera 1) - t_exposure(Camera n) + t_transferGigE(Camera 2) + ... + t_transferGigE(Camera n-1)
Thus, for this example, the transmission delays of cameras 2 and 3 are calculated as follows:

t_TransmissionDelay(Camera 2) = t_exposure(Camera 1) + t_readout(Camera 1) - t_exposure(Camera 2)

t_TransmissionDelay(Camera 3) = t_exposure(Camera 1) + t_readout(Camera 1) - t_exposure(Camera 3) + t_transferGigE(Camera 2)
Solving these equations leads to:

t_TransmissionDelay(Camera 2) = 32 msec + 23.8 msec - 32 msec = 23.8 msec = 23800000 ticks

t_TransmissionDelay(Camera 3) = 32 msec + 23.8 msec - 32 msec + 14.9 msec = 38.7 msec = 38700000 ticks
Notice
In BGAPI the delay is specified in ticks. How to convert milliseconds into ticks?
1 tick = 1 ns
1 msec = 1000000 ns
1 tick = 0.000001 msec
ticks = t_TransmissionDelay [msec] / 0.000001 = t_TransmissionDelay [ticks]
10.6 Multicast

Multicast Addresses:
For multicasting Baumer suggests an address range from 232.0.1.0 to 232.255.255.255.
Multicasting offers the possibility to send data packets to more than one destination address without multiplying the bandwidth between the camera and the multicast device (e.g. router or switch).
The data is sent out to an intelligent network node, an IGMP (Internet Group Management Protocol) capable switch or router, and distributed to the receiver group with the specific address range.
In the example in the figure below, multicast is used to process image and message data separately on two different PCs.
Figure49►
Principle of Multicast
10.7 IP Configuration

10.7.1 Persistent IP
A persistent IP address is assigned permanently. Its validity is unlimited.
Notice
Please ensure a valid combination of IP address and subnet mask.
IP range | Subnet mask
0.0.0.0 – 127.255.255.255 | 255.0.0.0
128.0.0.0 – 191.255.255.255 | 255.255.0.0
192.0.0.0 – 223.255.255.255 | 255.255.255.0

These combinations are not checked on the fly by Baumer GAPI, the Baumer GAPI Viewer or the camera. This check is performed when restarting the camera; in case of an invalid IP/subnet combination the camera will start in LLA mode.
* This feature is disabled by default.
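As an illustration of the table in the notice above, the following sketch derives the expected subnet mask from the first octet of a classful IPv4 address. It merely mirrors the listed ranges and is not part of any Baumer API.

#include <iostream>
#include <string>

// Default subnet mask for a classful IPv4 address, mirroring the table above.
// Addresses of 224.0.0.0 and above are not covered by that table.
std::string defaultSubnetMask(unsigned firstOctet)
{
    if (firstOctet <= 127) return "255.0.0.0";      // 0.0.0.0   - 127.255.255.255
    if (firstOctet <= 191) return "255.255.0.0";    // 128.0.0.0 - 191.255.255.255
    if (firstOctet <= 223) return "255.255.255.0";  // 192.0.0.0 - 223.255.255.255
    return "";                                      // multicast / reserved range
}

int main()
{
    std::cout << defaultSubnetMask(192) << "\n";    // e.g. 192.168.0.10 -> 255.255.255.0
}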
10.7.2 DHCP (Dynamic Host Configuration Protocol)
DHCP automates the assignment of network parameters such as IP addresses, subnet masks and gateways. This process takes up to 12 sec.

Internet Protocol:
On Baumer cameras IPv4 is employed.

Figure 50: Connection pathway for Baumer Gigabit Ethernet cameras: the device connects step by step via the three described mechanisms.
Once the device (client) is connected to a DHCP-enabled network, four steps are processed:
▪ DHCP Discovery
In order to nd a DHCP server, the client sends a so called DHCPDISCOVER broad­cast to the network.
▪ DHCP Offer
After reception of this broadcast, the DHCP server will answer the request with a unicast, known as DHCPOFFER. This message contains several items of information, such as:
Information for the client: MAC address, offered IP address, subnet mask, duration of the lease
Information on the server: IP address
DHCP:
Please pay attention to the DHCP Lease Time.
◄Figure51
DHCP Discovery
(broadcast)
◄Figure52
DHCP offer (unicast)
58
Figure53►
DHCP Request (broadcast)
DHCP Lease Time:
The validity of DHCP IP addresses is limited by the lease time. When this time is elapsed, the IP congu­ration needs to be redone. This causes a connection abort.
▪ DHCP Request
Once the client has received this DHCPOFFER, the transaction needs to be confirmed. For this purpose the client sends a so-called DHCPREQUEST broadcast to the network. This message contains the IP address of the offering DHCP server and informs all other possible DHCP servers that the client has obtained all the necessary information, so there is no need to issue IP information to the client.
▪ DHCP Acknowledgement
Once the DHCP server receives the DHCPREQUEST, a unicast containing all necessary information is sent to the client. This message is called DHCPACK. According to this information, the client will configure its IP parameters and the process is complete.

Figure 54: DHCP Acknowledgement (unicast).
LLA:
Please ensure operation of the PC within the same subnet as the camera.
10.7.3 LLA
LLA (Link-Local Address) refers to a local IP range from 169.254.0.1 to 169.254.254.254 and is used for the automated assignment of an IP address to a device when no other method for IP assignment is available.
The IP address is determined by the host, using a pseudo-random number generator, which operates in the IP range mentioned above.
Once an address is chosen, it is sent together with an ARP (Address Resolution Protocol) query to the network to check whether it already exists. Depending on the response, the IP address will be assigned to the device (if not already in use) or the process is repeated. This method may take some time: the GigE Vision® connection in LLA mode should not take longer than 40 seconds, but in the worst case it can take up to several minutes.
10.7.4 Force IP *)
Inadvertent faulty operation may result in connection errors between the PC and the camera. In this case "Force IP" may be the last resort. The Force IP mechanism sends an IP address and a subnet mask to the MAC address of the camera. These settings are sent without verification and are adopted immediately by the camera. They remain valid until the camera is de-energized.
*) In the GigE Vision® standard, this feature is defined as "Static IP".
10.8 Packet Resend
Since the GigE Vision® standard stipulates the use of UDP (a stateless user datagram protocol) for data transfer, a mechanism for recovering "lost" data needs to be employed.
Here, a resend request is initiated if one or more packets are damaged during transfer and - due to an incorrect checksum - rejected afterwards.
On this topic one must distinguish between three cases:
10.8.1 Normal Case
In the case of unproblematic data transfer, all packets are transferred in their correct order from the camera to the PC. The probability of this happening is more than 99%.
10.8.2 Fault 1: Lost Packet within the Data Stream
If one or more packets are lost within the data stream, this is detected by the fact that packet number n is not followed by packet number (n+1). In this case the application sends a resend request (A). Following this request, the camera sends the next packet and then resends (B) the lost packet.
In our example packet no. 3 is lost. This fault is detected at packet no. 4, and the resend request is triggered. The camera then sends packet no. 5, followed by a resend of packet no. 3.
◄Figure55
Data stream without
damaged or lost pack-
ets.
◄Figure56
Resending lost packets
within the data stream.
Fault 2: 10.8.3 Lost Packet at the End of the Data Stream
In case of a fault at the end of the data stream, the application will wait for incoming pack-
ets for a predened time. When this time has elapsed, the resend request is triggered and 
the "lost" packets will be resent.
Figure 57: Resending of lost packets at the end of the data stream.
In our example, packets no. 3 to no. 5 are lost. This fault is detected after the predefined time has elapsed and the resend request (A) is triggered. The camera then resends packets no. 3 to no. 5 (B) to complete the image transfer.
10.8.4 Termination Conditions
The resend mechanism will continue until:
▪ all packets have reached the PC,
▪ the maximum number of resend repetitions is reached,
▪ the resend timeout has occurred, or
▪ the camera returns an error.
10.9 Message Channel
The asynchronous message channel is described in the GigE Vision® standard and offers the possibility of event signaling. There is a timestamp (64 bits) for each announced event, which contains the accurate time the event occurred. Each event can be activated and deactivated separately.
10.9.1 Event Generation
Event | Description
GenICam™
ExposureStart | Exposure started
ExposureEnd | Exposure ended
FrameStart | Acquisition of a frame started
FrameEnd | Acquisition of a frame ended
Line0Rising | Rising edge detected on IO-Line 0
Line0Falling | Falling edge detected on IO-Line 0
Line1Rising | Rising edge detected on IO-Line 1
Line1Falling | Falling edge detected on IO-Line 1
Line2Rising | Rising edge detected on IO-Line 2
Line2Falling | Falling edge detected on IO-Line 2
Line3Rising | Rising edge detected on IO-Line 3
Line3Falling | Falling edge detected on IO-Line 3
Vendor-specific
EventError | Error in event handling
EventLost | Occurred event not analyzed
TriggerReady | t_notready elapsed, camera is able to process an incoming trigger
TriggerOverlapped | Overlapped mode detected
TriggerSkipped | Camera overtriggered
10.10 Action Command / Trigger over Ethernet
The basic idea behind this feature was to achieve a simultaneous trigger for multiple cameras.
Action Command:
Since hardware release 2.1 the implementation of the Action Command follows the regulations of the GigE Vision® standard 1.2.

Therefore a broadcast Ethernet packet was implemented. This packet can be used to induce a trigger as well as other actions.
Due to the fact that different network components feature different latencies and jitters, the trigger over Ethernet is not as synchronous as a hardware trigger. Nevertheless, applications can deal with these jitters in switched networks, and therefore this is a convenient method for synchronizing cameras by software.
The action command is sent as a broadcast. In addition it is possible to group cameras, so that not all attached cameras respond to a broadcast action command.
Such an action command contains:
▪ a Device Key - for authorization of the action on this device
▪ an Action ID - for identification of the action signal
▪ a Group Key - for triggering actions on separated groups of devices
▪ a Group Mask - for extension of the range of separate device groups
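The sketch below merely groups the four fields listed above into a structure and hands them to a hypothetical sender; how they are packed into the GVCP ACTION_CMD datagram is left to the GigE Vision stack or SDK in use, so sendActionCommand() is a placeholder and the key values are made-up examples.

#include <cstdint>
#include <iostream>

// The four fields of an action command as listed above.
struct ActionCommand {
    std::uint32_t deviceKey;   // authorizes the action on the device
    std::uint32_t actionId;    // identifies the action signal
    std::uint32_t groupKey;    // addresses a separate group of devices
    std::uint32_t groupMask;   // extends / narrows the addressed group
};

void sendActionCommand(const ActionCommand& cmd)
{
    // Placeholder: broadcast the action command via your GigE Vision stack here.
    std::cout << "broadcast action 0x" << std::hex << cmd.actionId << std::dec << "\n";
}

int main()
{
    // Trigger all cameras that were parameterized with these (example) keys.
    sendActionCommand(ActionCommand{0x12345678u, 0x1u, 0x1u, 0xFFFFFFFFu});
}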
10.10.1 Example: Triggering Multiple Cameras
The figure below displays three cameras which are triggered synchronously by a software application.

Figure 58: Triggering of multiple cameras via trigger over Ethernet (ToE).

Another application of the action command is that a secondary application, a PC or one of the attached cameras can actuate the trigger.
11. Start-Stop-Behaviour
11.1 Start / Stop / Abort Acquisition (Camera)
Once the image acquisition is started, three steps are processed within the camera:
▪ Determination of the current set of image parameters
▪ Exposure of the sensor
▪ Readout of the sensor
Afterwards a repetition of this process takes place until the camera is stopped.
Stopping the acquisition means that the process mentioned above is aborted. If the stop signal occurs within a readout, the current readout will be finished before stopping the camera. If the stop signal arrives within an exposure, the exposure is aborted.
Abort Acquisition
The acquisition abort represents a special case of stopping the current acquisition.
When an exposure is running, the exposure is aborted immediately and the image is not read out.
11.2 Start / Stop Interface
Without starting the interface, transmission of image data from the camera to the PC will not proceed. If the image acquisition is started before the interface is activated, the recorded images are lost.
If the interface is stopped during a transmission, the transmission is aborted immediately.
11.3 Acquisition Modes
In general, three acquisition modes are available for the cameras in the Baumer VisiLine IP series.
11.3.1 Free Running
Free running means the camera records images continuously without external events.
11.3.2 Trigger
The basic idea behind the trigger mode is the synchronization of cameras with machine cycles. Trigger mode means that image recording is not continuous, but triggered by external events.
This feature is described in chapter 8.3.3 Trigger Mode.
11.3.3 Sequencer
A sequencer is used for the automated control of series of images, using different settings for exposure time and gain.
12. Cleaning

Cover glass
Notice
The sensor is mounted dust-proof. Removal of the cover glass for cleaning is not necessary.
Avoid cleaning the cover glass of the sensor if possible. To prevent dust, follow the instructions under "Install lens".
If you must clean it, use compressed air or a soft, lint-free cloth dampened with a small quantity of pure alcohol.
Housing
Caution!
Volatile solvents for cleaning. Volatile solvents damage the surface of the camera. Never use volatile solvents (benzine, thinner) for cleaning!
To clean the surface of the camera housing, use a soft, dry cloth. To remove persistent stains, use a soft cloth dampened with a small quantity of neutral detergent, then wipe dry.
13. Transport / Storage

Notice
Transport the camera only in its original packaging. When the camera is not installed, store it in the original packaging.

Storage Environment
Storage temperature: -10°C ... +70°C (+14°F ... +158°F)
Storage humidity: 10% ... 90%, non-condensing
14. Disposal
Do not dispose of outdated products with electrical or electronic circuits in the normal domestic waste; dispose of them according to your national law and to the directives 2002/96/EC and 2006/66/EC via the competent collection points for recycling.
The proper disposal of obsolete equipment helps to save valuable resources and prevents possible adverse effects on human health and the environment.
The return of the packaging to the material cycle helps conserve raw materials and reduces the production of waste. When no longer required, dispose of the packaging materials in accordance with the local regulations in force.
Keep the original packaging during the warranty period in order to be able to pack the device properly in the event of a warranty claim.
15. Warranty Notes

Notice
If it is obvious that the device was dismantled, reworked or repaired by anyone other than Baumer technicians, Baumer Optronic will not take any responsibility for the subsequent performance and quality of the device!

16. Support
If you have any problems with the camera, feel free to contact our support.
Worldwide
Baumer Optronic GmbH
Badstrasse 30 DE-01454 Radeberg, Germany
Tel: +49 (0)3528 4386 845
E-mail: support.cameras@baumer.com
Website: www.baumer.com

17. Conformity
Cameras of the Baumer VisiLine IP family comply with:
CE, FCC Part 15 Class B, RoHS
17.1 CE
We declare, under our sole responsibility, that the previously described Baumer VisiLine IP cameras conform to the CE directives.

17.2 FCC – Class B Device
Note: This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference in a residential environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation. If this equipment does cause harmful interference to radio or television reception, which can be determined by turning the equipment off and on, the user is encouraged to try to correct the interference by one or more of the following measures:
▪ Reorient or relocate the receiving antenna.
▪ Increase the separation between the equipment and the receiver.
▪ Connect the equipment into an outlet on a circuit different from that to which the receiver is connected.
▪ Consult the dealer or an experienced radio/TV technician for help.
Baumer Optronic GmbH
Badstrasse 30 · DE-01454 Radeberg, Germany
Phone +49 (0)3528 4386 0 · Fax +49 (0)3528 4386 86
sales@baumeroptronic.com · www.baumer.com
Technical data has been fully checked, but accuracy of printed matter not guaranteed.
Subject to change without notice. Printed in Germany 05/13. v1.1 11110804