17.1 CE
17.2 FCC – Class B Device
1. General Information

Thanks for purchasing a camera of the Baumer family. This User's Guide describes how to connect, set up and use the camera.

Read this manual carefully and observe the notes and safety instructions!
Target group for this User's Guide
This User's Guide is aimed at experienced users who want to integrate camera(s) into a vision system.
Copyright
Any duplication or reprinting of this documentation, in whole or in part, and the reproduction of the illustrations, even in modified form, is permitted only with the written approval of Baumer. This document is subject to change without notice.
Classification of the safety instructions
In this User's Guide, the safety instructions are classified as follows:
Notice
Gives helpful notes on operation or other general recommendations.
Caution
Indicates a possibly dangerous situation. If the situation is not avoided, slight
or minor injury could result or the device may be damaged.
2. General safety instructions

Caution
Heat can damage the camera. Provide adequate dissipation of heat to ensure that the temperature does not exceed the specified value (see Heat Transmission).
As there are numerous possibilities for installation, Baumer does not specify a specific method for proper heat dissipation.
3. Intended Use
The camera is used to capture images that can be transferred over a GigE interface to a
PC.
4. General Description

No.  Description
1    Tube
2    C-Mount lens connection
3    LEDs
4    Power supply / Digital-IO
5    Data / PoE interface
All Baumer Gigabit Ethernet cameras of the VisiLine IP family are characterized by:

Best image quality
▪ Low noise and structure-free image information
▪ High quality mode with minimum noise

Flexible image acquisition
▪ Industrially compliant process interface with parameter setting capability (trigger and flash)

Fast image transfer
▪ Reliable transmission up to 1000 Mbit/sec according to IEEE 802.3
▪ Cable length up to 100 m
▪ PoE (Power over Ethernet)
▪ Baumer driver for high data volume with low CPU load
▪ High-speed multi-camera operation

Simple integration
▪ Powerful Software Development Kit (SDK) with sample codes and help files for simple integration
▪ Baumer GAPI for all Baumer cameras
▪ Baumer viewer for all camera functions
▪ Gen<I>Cam™ compliant XML file to describe the camera functions
▪ Supplied with installation program with automatic camera recognition for simple commissioning

Compact design
▪ Protection class IP 65/67
▪ Light weight / flexible assembly mechanics

Reliable operation
▪ State-of-the-art camera electronics and precision mechanics
▪ Low power consumption and minimal heat generation
▪ GigE Vision® compliant
5. Camera Models
Camera Type           | Sensor Size | Resolution  | Full Frames [max. fps]

CCD Sensor (monochrome / color)
VLG-02M.I / VLG-02C.I | 1/4"        | 656 x 490   | 160
VLG-12M.I / VLG-12C.I | 1/3"        | 1288 x 960  | 42
VLG-20M.I / VLG-20C.I | 1/1.8"      | 1624 x 1228 | 27

CMOS Sensor (monochrome / color)
VLG-22M.I / VLG-22C.I | 2/3"        | 2044 x 1084 | 55
VLG-40M.I / VLG-40C.I | 1"          | 2044 x 2044 | 29

Figure 1: Dimensions of a Baumer VisiLine IP camera.
6. Installation

6.1 Lens Mounting
Notice
Avoid contamination of the sensor and the lens by dust and airborne particles when mounting the support or the lens to the device!

Therefore the following points are very important:
▪ Install the camera in an environment that is as dust free as possible!
▪ Keep the dust cover (bag) on the camera as long as possible!
▪ While the sensor is unprotected, hold the board with the sensor facing downwards.
▪ Avoid contact with any optical surface of the camera!
* If the environmental temperature exceeds the values listed in the table below, the camera must be cooled (see Heat Transmission).

Figure 2: Temperature measuring point.

Humidity
Storage and Operating Humidity: 10% ... 90% (non-condensing)
6.2 Heat Transmission

Caution
Heat can damage the camera. Provide adequate dissipation of heat to ensure that the temperature does not exceed 50°C (122°F).
As there are numerous possibilities for installation, Baumer does not specify a specific method for proper heat dissipation.

Measuring Point | Maximum Temperature
T               | 50°C (122°F)
7. Pin-Assignment
7.1 Power Supply and Digital IOs

Power supply / Digital-IO (SACC-CI-M12MS-8CON-SH TOR 32)
Wire colors of the connecting cable:

Pin | Signal     | Wire color
1   | OUT 3      | white
2   | Power Vcc+ | brown
3   | IN 1       | green
4   | IO GND     | yellow
5   | U_ext OUT  | grey
6   | OUT 1      | pink
7   | Power GND  | blue
8   | OUT 2      | red

Notice
The electrical data are available in the respective data sheet.
7.2 Ethernet Interface (PoE)

Notice
The VisiLine IP supports PoE (Power over Ethernet) IEEE 802.3af Clause 33, 48 V power supply.

Ethernet (SACC-CI-M12FS-8CON-L180-10G)

Pin | Signal | Wire color
1   | MX1+   | white
2   | MX1-   | brown
3   | MX2+   | green
4   | MX2-   | yellow
5   | MX4+   | grey
6   | MX4-   | pink
7   | MX3-   | blue
8   | MX3+   | red
7.2.1 LED Signaling

Figure 3: LED positions on Baumer VisiLine cameras.

LED | Signal      | Meaning
1   | green       | Link active
2   | yellow      | Transmitting
2   | green flash | Receiving
8. Product Specifications

8.1 Spectral Sensitivity

The spectral sensitivity characteristics of monochrome and color matrix sensors for VisiLine IP cameras are displayed in the following graphs. The characteristic curves for the sensors do not take the characteristics of lenses and light sources without filters into consideration.

Values relate to the respective technical data sheets of the sensors.

Figure 4: Spectral sensitivities (relative response over wavelength [nm]) for Baumer cameras with 0.3 MP CCD sensor (VLG-02M.I / VLG-02C.I).

Figure 5: Spectral sensitivities for Baumer cameras with 1.2 MP CCD sensor (VLG-12M.I / VLG-12C.I).

Figure 6: Spectral sensitivities for Baumer cameras with 2.0 MP CCD sensor (VLG-20M.I / VLG-20C.I).
Figure 7: Spectral sensitivities (quantum efficiency [%] over wavelength [nm]) for Baumer cameras with 2.2 and 4.0 MP CMOS sensor (VLG-22M.I / VLG-40M.I and VLG-22C.I / VLG-40C.I).
8.2 Field of View Position

The figure below shows the position of the photosensitive surface of the sensor within the optical path: the front cover glass (thickness 1 ± 0.1 mm), the cover glass of the sensor (thickness D) and the C-mount flange focal distance (17.526 mm).

The typical accuracy, assuming the root mean square value, is displayed in the figure and the table below:

Figure 8: Sensor accuracy of the Baumer VisiLine IP.

Camera Type | ±XM [mm] | ±YM [mm] | ±XR [mm] | ±YR [mm] | ±Z typ [mm] | ±α typ [°] | A [mm] | D** [mm]
VLG.I-02*   | 0.09     | 0.09     | 0.09     | 0.09     | 0.025       | 0.7        | 16.1   | 0.75
VLG.I-12*   | 0.06     | 0.06     | 0.06     | 0.06     | 0.025       | 0.7        | 16.6   | 0.5
VLG.I-20*   | 0.06     | 0.06     | 0.06     | 0.06     | 0.025       | 0.7        | 16.6   | 0.5
VLG.I-22*   | 0.07     | 0.07     | 0.07     | 0.07     | 0.025       | 0.5        | 16.2   | 0.55 ± 0.05
VLG.I-40*   | 0.07     | 0.07     | 0.07     | 0.07     | 0.025       | 0.5        | 16.2   | 0.55 ± 0.05

* C or M
** Dimension D in this table is taken from the manufacturer's datasheet (edition 06/2012).
8.3 Acquisition Modes and Timings

The image acquisition consists of two separate, successively processed components. Exposing the pixels on the photosensitive surface of the sensor is only the first part of the image acquisition. After completion of the first step, the pixels are read out.

Thereby the exposure time (t_exposure) can be adjusted by the user; the time needed for the readout (t_readout), however, is given by the particular sensor and image format.

Baumer cameras can be operated in three modes: the Free Running Mode, the Fixed-Frame-Rate Mode and the Trigger Mode.

The cameras can be operated non-overlapped*) or overlapped, depending on the mode used and the combination of exposure and readout time:

Non-overlapped Operation: Here the time intervals are long enough to process exposure and readout successively.

Overlapped Operation: In this operation the exposure of frame (n+1) takes place during the readout of frame (n).

*) Non-overlapped means the same as sequential.

8.3.1 Free Running Mode

In the "Free Running" mode the camera records images permanently and sends them to the PC. In order to achieve an optimal result (with regard to the adjusted exposure time t_exposure and image format) the camera is operated overlapped.

In case of exposure times equal to or less than the readout time (t_exposure ≤ t_readout), the maximum frame rate is provided for the image format used. For longer exposure times the frame rate of the camera is reduced. The flash is active during the exposure (t_flash = t_exposure).

Timings:
A - exposure time frame (n) effective
B - image parameters frame (n) effective
C - exposure time frame (n+1) effective
D - image parameters frame (n+1) effective

Image parameters: Offset, Gain, Mode, Partial Scan
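The relationship between exposure time, readout time and the achievable frame rate in overlapped operation can be sketched as follows. This is a simplified model for illustration only; the readout time used in the example is an assumed value, as actual values depend on camera model and image format:

```python
def free_running_fps(t_exposure_us: float, t_readout_us: float) -> float:
    """Approximate frame rate in overlapped Free Running mode.

    The frame period is limited by whichever phase is longer: for
    t_exposure <= t_readout the camera delivers its maximum frame
    rate; longer exposures stretch the frame period.
    """
    period_us = max(t_exposure_us, t_readout_us)
    return 1e6 / period_us

# assumed example: ~42 fps full frame implies a readout of roughly 23.8 ms
print(free_running_fps(10_000, 23_810))   # exposure shorter than readout
print(free_running_fps(50_000, 23_810))   # long exposure lowers the rate
```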
8.3.2 Fixed-Frame-Rate Mode

With this feature Baumer introduces a clever technique to the VisiLine IP camera series that enables the user to predefine a desired frame rate in continuous mode.

For the employment of this mode the cameras are equipped with an internal clock generator that creates trigger pulses.

Notice
From a certain frame rate, skipping internal triggers is unavoidable. In general, this depends on the combination of adjusted frame rate, exposure and readout times.
8.3.3 Trigger Mode

After a specified external event (trigger) has occurred, image acquisition is started. Depending on the interval of the triggers used, the camera operates non-overlapped or overlapped in this mode.

With regard to timings in the trigger mode, the following basic formulas need to be taken into consideration:

Case: t_exposure < t_readout
(1) t_earliestpossibletrigger(n+1) = t_exposure(n) + t_notready(n+1)
(2) t_notready(n+1) = t_readout(n) - t_exposure(n+1)

Case: t_exposure > t_readout
(3) t_earliestpossibletrigger(n+1) = t_exposure(n) + t_notready(n+1)
(4) t_notready(n+1) = t_exposure(n) - t_exposure(n+1)

8.3.3.1 Overlapped Operation: t_exposure(n+2) = t_exposure(n+1)

In overlapped operation attention should be paid to the time interval in which the camera is unable to process occurring trigger signals (t_notready). This interval is situated between two exposures. When this process time t_notready has elapsed, the camera is able to react to external events again.

After t_notready has elapsed, the timing of (E) depends on the readout time of the current image (t_readout(n)) and the exposure time of the next image (t_exposure(n+1)). It can be determined by the formulas mentioned above (no. 1 or 3, as the case may be).

In case of identical exposure times, t_notready remains the same from acquisition to acquisition.

Timings:
A - exposure time frame (n) effective
B - image parameters frame (n) effective
C - exposure time frame (n+1) effective
D - image parameters frame (n+1) effective
E - earliest possible trigger

Image parameters: Offset, Gain, Mode, Partial Scan
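The two cases can be illustrated with a short sketch. It assumes that t_notready(n+1) equals t_readout(n) - t_exposure(n+1) when exposures are shorter than the readout and t_exposure(n) - t_exposure(n+1) otherwise, and that the earliest possible trigger is measured from trigger (n). This is an interpretation for illustration, not a description of the camera firmware:

```python
def t_notready(t_exposure_n, t_exposure_n1, t_readout_n):
    """Interval in which trigger (n+1) would be ignored.

    Negative results are clamped to zero: the camera is then
    immediately ready for the next trigger.
    """
    if t_exposure_n < t_readout_n:
        t = t_readout_n - t_exposure_n1
    else:
        t = t_exposure_n - t_exposure_n1
    return max(t, 0.0)

def earliest_trigger(t_exposure_n, t_exposure_n1, t_readout_n):
    """Earliest possible trigger (n+1), measured from trigger (n)."""
    return t_exposure_n + t_notready(t_exposure_n, t_exposure_n1, t_readout_n)

# identical short exposures (all times in ms, values illustrative)
print(earliest_trigger(5.0, 5.0, 24.0))   # 5 + (24 - 5) = 24.0
```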
8.3.3.2 Overlapped Operation: t_exposure(n+2) > t_exposure(n+1)

If the exposure time (t_exposure) is increased from the current acquisition to the next acquisition, the time the camera is unable to process occurring trigger signals (t_notready) is scaled down.

This can be simulated with the formulas mentioned above (no. 2 or 4, as the case may be).

Timings:
A - exposure time frame (n) effective
B - image parameters frame (n) effective
C - exposure time frame (n+1) effective
D - image parameters frame (n+1) effective
E - earliest possible trigger

Image parameters: Offset, Gain, Mode, Partial Scan
8.3.3.3 Overlapped Operation: t_exposure(n+2) < t_exposure(n+1)

If the exposure time (t_exposure) is decreased from the current acquisition to the next acquisition, the time the camera is unable to process occurring trigger signals (t_notready) is scaled up.

When t_exposure is decreased such that t_notready exceeds the pause between two incoming trigger signals, the camera is unable to process this trigger and the acquisition of the image will not start (the trigger will be skipped).

Timings:
A - exposure time frame (n) effective
B - image parameters frame (n) effective
C - exposure time frame (n+1) effective
D - image parameters frame (n+1) effective
E - earliest possible trigger
F - frame not started / trigger skipped

Notice
From a certain frequency of the trigger signal, skipping triggers is unavoidable. In general, this frequency depends on the combination of exposure and readout times.

Image parameters: Offset, Gain, Mode, Partial Scan
8.3.3.4 Non-overlapped Operation

If the interval of the trigger signal is selected long enough, so that the image acquisitions (t_exposure + t_readout) run successively, the camera operates non-overlapped.

Timings:
A - exposure time frame (n) effective
B - image parameters frame (n) effective
C - exposure time frame (n+1) effective
D - image parameters frame (n+1) effective
E - earliest possible trigger

Image parameters: Offset, Gain, Mode, Partial Scan
8.3.4 Advanced Timings for GigE Vision® Message Channel

The following charts show some timings for the event signaling by the asynchronous message channel. Vendor-specific events like "TriggerReady", "TriggerSkipped", "TriggerOverlapped" and "ReadoutActive" are explained.

8.3.4.1 TriggerReady

This event signals whether the camera is able to process incoming trigger signals or not.

8.3.4.2 TriggerSkipped

If the camera is unable to process incoming trigger signals (that is, the camera is triggered within the interval t_notready), these triggers are skipped. On Baumer VisiLine IP cameras the user is informed about this fact by means of the event "TriggerSkipped".
8.3.4.3 TriggerOverlapped

This signal is active as long as the sensor is exposed and read out at the same time, which means the camera is operated overlapped.

Once a valid trigger signal occurs outside of a readout, the "TriggerOverlapped" signal changes to state low.

8.3.4.4 ReadoutActive

While the sensor is read out, the camera signals this by means of "ReadoutActive".
8.4 Software

8.4.1 Baumer GAPI

Baumer GAPI stands for Baumer "Generic Application Programming Interface". With this API Baumer provides an interface for optimal integration and control of Baumer cameras. This software interface also allows changing to other camera models.

It provides interfaces to several programming languages, such as C, C++ and the .NET™ Framework on Windows®, as well as Mono on Linux® operating systems, which offers the use of other languages, such as C# or VB.NET.

8.4.2 3rd Party Software

Strict compliance with the Gen<I>Cam™ standard allows Baumer to offer the use of 3rd party software for operation with cameras of the VisiLine IP family.

You can find a current listing of 3rd party software which was tested successfully in combination with Baumer cameras at http://www.baumer.com/de-en/produkte/identication-
9. Camera Functionalities

9.1 Image Acquisition

9.1.1 Image Format

A digital camera usually delivers image data in at least one format: the native resolution of the sensor. Baumer cameras are able to provide several image formats (depending on the type of camera).

Compared with standard cameras, the image format on Baumer cameras not only includes the resolution, but a set of predefined parameters.

These parameters are:
▪ Resolution (horizontal and vertical dimensions in pixels)
▪ Binning Mode

Camera Type | Full frame | Binning 2x2 | Binning 1x2 | Binning 2x1

Monochrome
VLG-02M.I | ■ | ■ | ■ | ■
VLG-12M.I | ■ | ■ | ■ | ■
VLG-20M.I | ■ | ■ | ■ | ■
VLG-22M.I | ■ | □ | □ | □
VLG-40M.I | ■ | □ | □ | □

Color
VLG-02C.I | ■ | ■ | ■ | ■
VLG-12C.I | ■ | ■ | ■ | ■
VLG-20C.I | ■ | ■ | ■ | ■
VLG-22C.I | ■ | □ | □ | □
VLG-40C.I | ■ | □ | □ | □
9.1.2 Pixel Format

On Baumer digital cameras the pixel format depends on the selected image format.

9.1.2.1 Definitions

RAW: Raw data format. Here the data are stored without processing.

Bayer: Raw data format of color sensors. Color filters are placed on these sensors in a checkerboard pattern, generally in a 50% green, 25% red and 25% blue array.

Figure 9: Sensor with Bayer pattern.

Mono: Monochrome. The color range of mono images consists of shades of a single color. In general, shades of gray or black-and-white are synonyms for monochrome.

RGB: Color model in which all detectable colors are defined by three coordinates: Red, Green and Blue. The three coordinates are displayed within the buffer in the order R, G, B.

Figure 10: RGB color space displayed as a color tube.

BGR: Here the color alignment mirrors RGB.

YUV: Color model used in the PAL TV standard and in image compression. In YUV, a high-bandwidth luminance signal (Y: luma information) is transmitted together with two color difference signals with low bandwidth (U and V: chroma information). Thereby U represents the difference between blue and luminance (U = B - Y), and V is the difference between red and luminance (V = R - Y). The third color, green, does not need to be transmitted; its value can be calculated from the other three values.

YUV 4:4:4: Here each of the three components has the same sample rate. Therefore there is no subsampling.

YUV 4:2:2: The chroma components are sampled at half the sample rate. This reduces the necessary bandwidth to two-thirds (in relation to 4:4:4) and causes no, or low, visual differences.

YUV 4:1:1: Here the chroma components are sampled at a quarter of the sample rate. This decreases the necessary bandwidth by half (in relation to 4:4:4).
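The difference signals can be computed from RGB in a small sketch. The luma weights used here are the common ITU-R BT.601 coefficients, which are an assumption; the manual does not specify the camera's internal coefficients:

```python
def rgb_to_yuv(r, g, b):
    """Difference-signal color model: U = B - Y, V = R - Y.

    Luma weights are the assumed BT.601 coefficients.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, b - y, r - y

def subsample_422(chroma_row):
    """YUV 4:2:2: keep every second chroma sample (half the sample rate)."""
    return chroma_row[::2]

y, u, v = rgb_to_yuv(255, 255, 255)   # white: the chroma differences vanish
print(y, u, v)
```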
Figure 11: Bit string of Mono 8 bit and RGB 8 bit.
Figure 12: Spreading of Mono 12 bit over two bytes (unused bits remain empty).
Figure 13: Spreading of two pixels in Mono 12 bit over three bytes (packed mode).

Pixel depth: In general, pixel depth defines the number of possible different values for each color channel. Mostly this will be 8 bit, which means 2^8 different "colors". For RGB or BGR these 8 bits per channel equal 24 bits overall.

Two bytes are needed for transmitting more than 8 bits per pixel, even if the second byte is not completely filled with data. In order to save bandwidth, the packed formats were introduced on Baumer VisiLine IP cameras. In these formats, the unused bits of one pixel are filled with data from the next pixel.
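The packed mode can be illustrated for Mono 12, where two 12-bit pixels share three bytes. The exact byte layout below follows the GigE Vision Mono12Packed convention and should be treated as an assumption for illustration:

```python
def pack_mono12(p0: int, p1: int) -> bytes:
    """Pack two 12-bit pixels into 3 bytes (Mono 12 packed).

    Assumed layout: byte 0 carries the upper 8 bits of pixel 0,
    byte 1 carries both lower nibbles, byte 2 the upper 8 bits
    of pixel 1.
    """
    b0 = (p0 >> 4) & 0xFF
    b1 = (p0 & 0x0F) | ((p1 & 0x0F) << 4)
    b2 = (p1 >> 4) & 0xFF
    return bytes([b0, b1, b2])

def unpack_mono12(data: bytes) -> tuple:
    """Recover the two 12-bit pixel values from 3 packed bytes."""
    b0, b1, b2 = data
    return ((b0 << 4) | (b1 & 0x0F), (b2 << 4) | (b1 >> 4))

assert unpack_mono12(pack_mono12(0xABC, 0x123)) == (0xABC, 0x123)
```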
9.1.2.2 Pixel Formats on Baumer VisiLine IP Cameras

Columns, in order: Mono 8 | Mono 12 | Mono 12 Packed | Bayer RG 8 | Bayer RG 10 | Bayer RG 12 | RGB 8 | BGR 8 | YUV8_UYV | YUV422_8_UYVY | YUV411_8_UYYVYY

Monochrome
VLG-02M.I | ■ ■ ■ □ □ □ □ □ □ □ □
VLG-12M.I | ■ ■ ■ □ □ □ □ □ □ □ □
VLG-20M.I | ■ ■ ■ □ □ □ □ □ □ □ □
VLG-22M.I | ■ ■ □ □ □ □ □ □ □ □ □
VLG-40M.I | ■ ■ □ □ □ □ □ □ □ □ □

Color
VLG-02C.I | ■ □ □ ■ □ ■ ■ ■ ■ ■ ■
VLG-12C.I | ■ □ □ ■ □ ■ ■ ■ ■ ■ ■
VLG-20C.I | ■ □ □ ■ □ ■ ■ ■ ■ ■ ■
VLG-22C.I | □ □ □ ■ ■ ■ □ □ □ □ □
VLG-40C.I | □ □ □ ■ ■ ■ □ □ □ □ □
9.1.3 Exposure Time

On exposure of the sensor, the incidence of photons produces a charge separation on the semiconductors of the pixels. This results in a voltage difference, which is used for signal extraction.

Figure 14: Incidence of light causes charge separation on the semiconductors of the sensor.

The signal strength is influenced by the incoming amount of photons. It can be increased by increasing the exposure time (t_exposure).

On Baumer VisiLine IP cameras, the exposure time can be set within the following ranges (step size 1 μs):

9.1.4 PRNU / DSNU Correction

CMOS sensors exhibit nonuniformities that are often called fixed pattern noise (FPN). However, it is not noise but a fixed variation from pixel to pixel that can be corrected. The advantage of using this correction is a more homogeneous picture, which may simplify the image analysis. Variations from pixel to pixel of the dark signal are called dark signal nonuniformity (DSNU), whereas photo response nonuniformity (PRNU) describes variations of the sensitivity. DSNU is corrected via an offset, while PRNU is corrected by a factor.

The correction is based on columns. It is important that the correction values are computed for the used sensor readout configuration. During camera production this is done for the factory defaults. If other settings are used (e.g. a different number of readout channels), using this correction with the default data set may degrade the image quality. In this case the user may derive a specific data set for the used setup.

PRNU / DSNU Correction Off | PRNU / DSNU Correction On
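The column-based correction can be sketched as a per-column offset (DSNU) followed by a per-column factor (PRNU). The data layout and all values are illustrative assumptions, not the camera's calibration format:

```python
# DSNU is removed by subtracting an offset per column,
# PRNU by multiplying with a factor per column.

def fpn_correct(row, dsnu_offset, prnu_factor):
    return [(p - o) * f for p, o, f in zip(row, dsnu_offset, prnu_factor)]

row = [101, 99, 103]            # raw pixel values of one sensor row
offsets = [1.0, -1.0, 3.0]      # assumed dark signal nonuniformity per column
factors = [1.00, 1.02, 0.98]    # assumed photo response nonuniformity per column
print(fpn_correct(row, offsets, factors))
```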
9.1.5 HDR (High Dynamic Range)

Camera Type | HDR
CCD
VLG-02M.I / VLG-02C.I | □
VLG-12M.I / VLG-12C.I | □
VLG-20M.I / VLG-20C.I | □
CMOS
VLG-22M.I / VLG-22C.I | ■
VLG-40M.I / VLG-40C.I | ■

Besides the standard linear response, the sensor supports a special high dynamic range mode (HDR) called piecewise linear response. In this mode, illuminated pixels that reach a certain programmable voltage level are clipped. Darker pixels that do not reach this threshold remain unchanged. The clipping can be adjusted two times within a single exposure by configuring the respective time slices and clipping voltage levels. See the figure below for details.

In this mode, the values for t_Expo0, t_Expo1, Pot0 and Pot1 can be edited. The value for t_Expo2 is calculated automatically in the camera (t_Expo2 = t_exposure - t_Expo0 - t_Expo1).

HDR Off | HDR On
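The automatic calculation of the last time slice, t_Expo2 = t_exposure - t_Expo0 - t_Expo1, can be sketched as follows (times in µs, values illustrative):

```python
def hdr_slices(t_exposure: int, t_expo0: int, t_expo1: int) -> int:
    """Return the automatically derived last time slice t_Expo2.

    The first two slices must fit into the total exposure time.
    """
    t_expo2 = t_exposure - t_expo0 - t_expo1
    if t_expo2 < 0:
        raise ValueError("t_Expo0 + t_Expo1 must not exceed t_exposure")
    return t_expo2

print(hdr_slices(10_000, 6_000, 3_000))   # remaining slice of 1000 µs
```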
9.1.6 Look-Up-Table

The Look-Up-Table (LUT) is employed on Baumer VisiLine IP monochrome and color cameras. It contains 2^12 (4096) values for the available levels. These values can be adjusted by the user.

9.1.7 Gamma Correction

With this feature, Baumer VisiLine IP cameras offer the possibility of compensating the non-linearity in the perception of light by the human eye.

Figure 15: Non-linear perception of the human eye (H - perception of brightness, E - energy of light).

For this correction, the corrected pixel intensity (Y') is calculated from the original intensity of the sensor's pixel (Y_original) and the correction factor γ using the following formula (in an oversimplified version):

Y' = Y_original^γ

On Baumer VisiLine IP cameras the correction factor γ is adjustable from 0.001 to 2.

The values of the calculated intensities are entered into the Look-Up-Table (see 9.1.6). Thereby, previously existing values within the LUT will be overwritten.

Notice
If the LUT feature is disabled on the software side, the gamma correction feature is disabled, too.
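Filling a 12-bit LUT from the simplified gamma formula can be sketched as follows. Normalizing intensities to [0, 1] before applying the exponent is an assumption, since the manual gives only the simplified relation Y' = Y^γ:

```python
def gamma_lut(gamma: float, depth: int = 12) -> list:
    """Build a gamma-correction LUT with 2^depth entries."""
    levels = 1 << depth          # 4096 entries for 12 bit
    m = levels - 1
    return [round(((i / m) ** gamma) * m) for i in range(levels)]

lut = gamma_lut(0.5)             # gamma within the adjustable range 0.001..2
print(len(lut), lut[0], lut[-1])
```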
9.1.8 Region of Interest

With the "Region of Interest" (ROI) function it is possible to predefine a so-called region of interest or partial scan. This ROI is an area of pixels of the sensor. On image acquisition, only the information of these pixels is sent to the PC. Therefore, not all lines of the sensor are read out, which decreases the readout time (t_readout). This increases the frame rate.

This function is employed when only a region of the field of view is of interest. It is coupled to a reduction in resolution.

The ROI is specified by four values:
▪ Offset X - x-coordinate of the first relevant pixel
▪ Offset Y - y-coordinate of the first relevant pixel
▪ Size X - horizontal size of the ROI
▪ Size Y - vertical size of the ROI

Figure 16: ROI parameters.

9.1.8.1 ROI Readout

In the illustration below, the readout time would be decreased to 40% compared to a full frame readout.

Figure 17: Decrease in readout time by using partial scan.
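The gain in readout time can be estimated with a simple proportional model: reading fewer lines shortens t_readout roughly in proportion to Size Y. This idealization matches the 40% example above; the helper and the values are illustrative assumptions:

```python
def roi_readout_time(t_readout_full_ms: float, size_y: int, full_height: int) -> float:
    """Estimate the readout time of a partial scan (proportional model)."""
    return t_readout_full_ms * size_y / full_height

# reading 40% of the lines takes roughly 40% of the full readout time
print(roi_readout_time(24.0, 384, 960))
```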
9.1.9 Binning

On digital cameras, you can find several operations for increasing sensitivity. One of them is the so-called "binning". Here, the charge carriers of neighboring pixels are aggregated, so the sensitivity is increased by the amount of binned pixels. By using this operation, the gain in sensitivity is coupled to a reduction in resolution.

Baumer cameras support three types of binning: vertical, horizontal and bidirectional.

In unidirectional binning, vertically or horizontally neighboring pixels are aggregated and reported to the software as one single "superpixel".

In bidirectional binning, a square of neighboring pixels is aggregated.

Binning | Example
without | Figure 18: Full frame image, no binning of pixels.
1x2     | Figure 19: Vertical binning causes a vertically compressed image with doubled brightness.
2x1     | Figure 20: Horizontal binning causes a horizontally compressed image with doubled brightness.
2x2     | Figure 21: Bidirectional binning causes both a horizontally and vertically compressed image with quadruple brightness.
9.1.10 Brightness Correction (Binning Correction)

The aggregation of charge carriers may cause an overload. To prevent this, binning correction was introduced. Here, three binning modes need to be considered separately:

Figure 22: Aggregation of the charge carriers from four pixels in bidirectional binning.

Binning | Realization
1x2 | 1x2 binning is performed within the sensor; binning correction also takes place there. A possible overload is prevented by halving the exposure time.
2x1 | 2x1 binning takes place within the FPGA of the camera. The binning correction is realized by aggregating the charge quantities and then halving this sum.
2x2 | 2x2 binning is a combination of the above versions.
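The 2x1 correction (aggregate the two charge quantities, then halve the sum) can be sketched as follows; the 12-bit value range is an assumption for illustration:

```python
def binned_2x1(left: int, right: int, full_scale: int = 4095) -> int:
    """Aggregate two horizontally neighboring pixels and halve the sum,
    preventing an overload of the output range."""
    corrected = (left + right) // 2
    return min(corrected, full_scale)

print(binned_2x1(3000, 3500))
```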
9.1.11 Flip Image

The Flip Image function lets you flip the captured images horizontally and/or vertically before they are transmitted from the camera.

Notice
A defined ROI will also be flipped.

Camera Type           | Horizontal | Vertical
VLG-02M.I / VLG-02C.I | ■ | □
VLG-12M.I / VLG-12C.I | ■ | □
VLG-20M.I / VLG-20C.I | ■ | □
VLG-22M.I / VLG-22C.I | ■ | ■
VLG-40M.I / VLG-40C.I | ■ | ■

Figure 23: Flip image vertical (Normal / Flip vertical)
Figure 24: Flip image horizontal (Normal / Flip horizontal)
Figure 25: Flip image horizontal and vertical (Normal / Flip horizontal and vertical)
9.2 Color Processing

Baumer color cameras are balanced to a color temperature of 5000 K.

Oversimplified, color processing is realized by four modules.

Figure 26: Color processing modules of Baumer color cameras.

The color signals r (red), g (green) and b (blue) of the sensor are amplified in total and digitized within the camera module.

Within the Bayer processor, the raw signals r', g' and b' are amplified by using independent factors for each color channel. Then the missing color values are interpolated, which results in new color values (r'', g'', b''). The luminance signal Y is also generated.

The next step is the color transformation. Here the previously generated color signals r'', g'' and b'' are converted to the chroma signals U and V, which conform to the standard. Afterwards these signals are transformed into the desired output format. Thereby the following steps are processed simultaneously:

▪ Transformation to color space RGB or YUV
▪ External color adjustment
▪ Color adjustment as physical balance of the spectral sensitivities

In order to reduce the data rate of YUV signals, a subsampling of the chroma signals can be carried out. Here the following items can be customized to the desired output format:

▪ Order of data output
▪ Subsampling of the chroma components to YUV 4:2:2 or YUV 4:1:1
▪ Limitation of the data rate to 8 bits

9.3 Color Adjustment – White Balance

This feature is available on all color cameras of the Baumer VisiLine IP series and takes place within the Bayer processor.

White balance means independent adjustment of the three color channels, red, green and blue, by employing a correction factor for each channel.

9.3.1 User-specific Color Adjustment

The user-specific color adjustment in Baumer color cameras facilitates adjustment of the correction factors for each color gain. This way, the user is able to adjust the amplification of each color channel exactly to his needs. The correction factors for the color gains range from 1 to 4.

Figure 27: Examples of histograms for a non-adjusted image and for an image after user-specific white balance.
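The per-channel correction with factors in the documented range 1 to 4 can be sketched as follows; the function and values are illustrative, and the 12-bit range is an assumption:

```python
def white_balance(r, g, b, kr, kg, kb, full_scale=4095):
    """Apply one correction factor per color channel (range 1..4)."""
    for k in (kr, kg, kb):
        assert 1.0 <= k <= 4.0, "correction factors range from 1 to 4"
    clip = lambda v: min(round(v), full_scale)
    return clip(r * kr), clip(g * kg), clip(b * kb)

# boost red and blue to balance an image with a green cast (example values)
print(white_balance(1000, 1400, 900, 1.4, 1.0, 1.55))
```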
Figure 28: Examples of histograms for a non-adjusted image and for an image after "one push" white balance.

9.3.2 One Push White Balance

Here, the three color spectra are balanced to a single white point. The correction factors of the color gains are determined by the camera (one time).

9.4 Analog Controls

9.4.1 Offset / Black Level

On Baumer VisiLine IP cameras, the offset (or black level) is adjustable from 0 to 255 LSB (relating to 12 bit).

Camera Type           | Step Size | Relating to
VLG-02M.I / VLG-02C.I | 1 LSB     | 12 bit
VLG-12M.I / VLG-12C.I | 1 LSB     | 12 bit
VLG-20M.I / VLG-20C.I | 1 LSB     | 12 bit
VLG-22M.I / VLG-22C.I | 1 LSB     | 12 bit
VLG-40M.I / VLG-40C.I | 1 LSB     | 12 bit
9.4.2 Gain

In industrial environments motion blur is unacceptable. Due to this fact, exposure times are limited. However, this causes low output signals from the camera and results in dark images. To solve this issue, the signals can be amplified by a user-defined gain factor within the camera. This gain factor is adjustable.

Notice
Increasing the gain factor causes an increase of image noise.

CCD Sensor
Camera Type | Gain factor [dB]
VLG-02M.I / VLG-02C.I | 0 ... 26
VLG-12M.I / VLG-12C.I | 0 ... 26
VLG-20M.I / VLG-20C.I | 0 ... 26

CMOS Sensor
Camera Type | Gain factor [dB]
VLG-22M.I / VLG-22C.I | 0 ... 18
VLG-40M.I / VLG-40C.I | 0 ... 18
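The gain is given in dB. Converting it to a linear amplification factor with the usual voltage convention, factor = 10^(dB/20), is an assumption here, as the manual does not state the conversion:

```python
def db_to_linear(gain_db: float) -> float:
    """Convert a gain in dB to a linear amplification factor
    (voltage convention assumed)."""
    return 10 ** (gain_db / 20)

print(db_to_linear(0))     # unity gain
print(db_to_linear(26))    # upper CCD limit, roughly a factor of 20
```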
9.5 Pixel Correction

9.5.1 General Information

A certain probability of abnormal pixels, the so-called defect pixels, applies to the sensors of all manufacturers. The charge quantity on these pixels does not depend linearly on the exposure time.

The occurrence of these defect pixels is unavoidable and intrinsic to the manufacturing and aging process of the sensors.

The operation of the camera is not affected by these pixels. They only appear as brighter (warm pixel) or darker (cold pixel) spots in the recorded image.

Figure 29: Distinction of "warm" and "cold" pixels within the recorded image.
Figure 30: Charge quantity of "warm" and "cold" pixels compared with "normal" pixels.
Correction Algorithm9.5.2
Defect Pixels
Average Value
Corrected Pixels
On cameras of the Baumer VisiLine IP series, the problem of defect pixels is solved as follows:
▪ Possible defect pixels are identified during the production process of the camera.
▪ The coordinates of these pixels are stored in the factory settings of the camera.
▪ Once the sensor readout is completed, correction takes place:
▪ Before any other processing, the values of the neighboring pixels on the left and the right side of the defect pixels are read out (within the same Bayer phase for color).
▪ Then the average value of these 2 pixels is determined to correct the first defect pixel.
▪ Finally, the value of the second defect pixel is corrected by using the previously corrected pixel and the pixel on the other side of the defect pixel.
▪ The correction is able to correct up to two neighboring defect pixels.
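The correction steps above can be sketched for a single line of monochrome pixels (a simplified illustration with our own function names, not camera firmware):

```python
def correct_defect_pixels(line, defect_indices):
    """Correct up to two neighboring defect pixels as described above:
    the first defect gets the average of the valid pixels flanking the
    defect group; the second uses the freshly corrected first pixel."""
    corrected = list(line)
    defects = sorted(defect_indices)
    for i in defects:
        left = corrected[i - 1]  # already corrected if it was a defect itself
        # Skip over a neighboring defect when picking the right-hand value:
        right = corrected[i + 1] if (i + 1) not in defects else corrected[i + 2]
        corrected[i] = (left + right) // 2
    return corrected

# Single defect at index 2: replaced by the average of its neighbors.
print(correct_defect_pixels([10, 20, 99, 30, 40], {2}))     # [10, 20, 25, 30, 40]
# Two neighboring defects at indices 2 and 3:
print(correct_defect_pixels([10, 20, 99, 99, 40], {2, 3}))  # [10, 20, 30, 35, 40]
```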
9.5.3 Defect Pixel List
As stated previously, this list is determined within the production process of Baumer cameras and stored in the factory settings.
Additional hot or cold pixels can develop during the lifecycle of a camera. In this case Baumer offers the possibility of adding their coordinates to the defect pixel list. The user can determine the coordinates*) of the affected pixels and add them to the list.
Once the defect pixel list is stored in a user set, pixel correction is executed for all coordinates on the defect pixel list.
*) Position in relation to Full Frame Format (Raw Data Format / No flipping).
9.6 Process Interface

[Figure: IO matrix of the Baumer VisiLine IP on the input side - input Line 1 is routed via state selection (inverter) and signal selection (software side) to the internal signals "Trigger", "Timer" and "LineOut 1 ... 3".]
9.6.1 Digital IOs
9.6.1.1 User Definable Inputs
The wiring of these input connectors is left to the user.
The sole exception is compliance with the predetermined high and low levels (0 ... 4.5 V low, 11 ... 30 V high).
The defined signals have no direct effect, but can be analyzed and processed on the software side and used for controlling the camera. The employment of a so-called "IO matrix" offers the possibility of selecting the signal and the state to be processed.
On the software side the input signals are named "Trigger", "Timer" and "LineOut 1 ... 3".

With this feature, Baumer offers the possibility of wiring the output connectors to internal signals, which are controlled on the software side.
On VisiLine IP cameras, the output connector can hereby be wired to one of the provided internal signals: "Off", "ExposureActive", "Line 0", "Timer 1 ... 3", "ReadoutActive", "User0 ... 2", "TriggerReady", "TriggerOverlapped", "TriggerSkipped", "Sequencer Output 0 ... 2".
Besides this, the output can be disabled.
9.6.2 IO Circuits

◄ Figure 32: IO matrix of the Baumer VisiLine IP on the output side (input, output high active, output low active).

Notice
Low active: With this wiring, only one load can be connected. When all output pins (1, 2, 3) are connected to IO_GND, current flows through the resistor as soon as one output is switched. If only one output is connected to IO_GND, then only this one is usable. The other two outputs are not usable and must not be connected (e.g. to IO Power VCC)!
▲ Figure 33: Trigger signal, valid for Baumer cameras (low: 0 ... 4.5 V, high: 11 ... 30 V).

Figure 34 ► Camera in trigger mode: A - trigger delay, B - exposure time, C - readout time.
9.6.3 Trigger
Trigger signals are used to synchronize the camera exposure with a machine cycle or, in the case of a software trigger, to take images at predefined time intervals. Different trigger sources can be used here.

Trigger Delay:
The trigger delay is a flexible user-defined delay between the given trigger impulse and the image capture. The delay time can be set between 0.0 μsec and 2.0 sec with a step size of 1 μsec. In the case of multiple triggers arriving during the delay, these triggers are stored and delayed, too. The buffer is able to store up to 512 trigger signals during the delay.

Your benefits:
▪ No need for a perfect alignment of an external trigger sensor
▪ Different objects can be captured without hardware changes
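The behaviour of the trigger delay and its 512-entry buffer can be sketched as a simple timing model (timestamps in μsec; our own illustration, not the camera firmware):

```python
def schedule_captures(trigger_times_us, delay_us, buffer_size=512):
    """Each accepted trigger starts an image capture delay_us later.
    Triggers arriving while earlier ones are still being delayed are
    buffered as well; beyond buffer_size, further triggers are discarded."""
    accepted = trigger_times_us[:buffer_size]
    return [t + delay_us for t in accepted]

# Three triggers arriving within the 1000 us trigger delay of the first:
print(schedule_captures([0, 100, 250], 1000))  # [1000, 1100, 1250]
```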
9.6.4 Trigger Source

Figure 35 ► Examples of possible trigger sources: photoelectric sensor, programmable logic controller, hardware trigger, software trigger, trigger broadcast, others.

Each trigger source has to be activated separately. When the trigger mode is activated, the hardware trigger is activated by default.
9.6.5 Debouncer
The basic idea behind this feature was to separate interfering signals (short peaks) from valid square wave signals, which can be important in industrial environments. Debouncing means that invalid signals are filtered out, and only signals lasting longer than a user-defined testing time t_DebounceHigh are recognized and routed to the camera to induce a trigger.
In order to detect the end of a valid signal and filter out possible jitters within the signal, a second testing time t_DebounceLow was introduced. This timing is also adjustable by the user. If the signal value falls to state low and does not rise again within t_DebounceLow, this is recognized as the end of the signal.

[Figure: Debouncer - incoming signals (valid and invalid) and the filtered signal. ∆t1 ... ∆t6 denote the high and low times of the signal, t_DebounceHigh and t_DebounceLow the user-defined debouncer delays for state high and state low.]

Debouncer:
The debouncing times t_DebounceHigh and t_DebounceLow are adjustable from 0 to 5 msec in steps of 1 μsec.

Please note that the edges of valid trigger signals are shifted by t_DebounceHigh and t_DebounceLow! Depending on these two timings, the trigger signal might be temporally stretched or compressed.
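The debouncing logic can be sketched as a filter over (time, level) edge events (our own simplified model operating on timestamps; the real camera filters the electrical input signal):

```python
def debounce(transitions, t_high, t_low):
    """transitions: chronologically sorted (time, level) changes of the input,
    level 1 = high, 0 = low; the signal is low before the first transition.
    Returns (start, end) pairs of the pulses that survive the debouncer."""
    pulses = []
    active_start = None  # start of a currently accepted (valid) pulse
    for i, (t, level) in enumerate(transitions):
        # How long the signal keeps this level (until the next transition):
        nxt = transitions[i + 1][0] if i + 1 < len(transitions) else float("inf")
        duration = nxt - t
        if level == 1 and active_start is None and duration >= t_high:
            active_start = t                  # stayed high long enough: valid
        elif level == 0 and active_start is not None and duration >= t_low:
            pulses.append((active_start, t))  # stayed low long enough: signal ends
            active_start = None
        # shorter highs are peaks, shorter lows are jitter: both ignored
    return pulses

# A 2 us spike, then a 20 us pulse with a 1 us dip inside (t_high = t_low = 5 us):
edges = [(0, 1), (2, 0), (10, 1), (20, 0), (21, 1), (30, 0)]
print(debounce(edges, 5, 5))  # [(10, 30)]
```

The spike is rejected and the short dip is bridged, matching the behaviour described above.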
9.6.6 Flash Signal
This signal is managed by the exposure of the sensor.
Furthermore, the falling edge of the flash output signal can be used to trigger a movement of the inspected objects. Due to this fact, the span of time used for the sensor readout (t_readout) can be used optimally in industrial environments.
Figure 37 ► Possible timer configuration on a Baumer VisiLine camera: trigger, exposure (t_exposure, t_triggerdelay) and timer (t_TimerDelay, t_TimerDuration).
9.6.7 Timers
Timers were introduced for advanced control of internal camera signals.
For example, the employment of a timer allows you to control the flash signal in such a way that the illumination does not start synchronized to the sensor exposure but a predefined interval earlier.
On Baumer VisiLine IP cameras the timer configuration includes four components:

Component    Description
TimerTriggerSource    This feature provides a source selection for each timer.
TimerTriggerActivation    This feature selects the part of the trigger signal (edges or states) that activates the timer.
TimerDelay    This feature represents the interval between the incoming trigger signal and the start of the timer.
TimerDuration    By this feature the activation time of the timer is adjustable.
9.6.7.1 Flash Delay
As previously stated, the Timer feature can be used to start the connected illumination earlier than the sensor exposure.
This implies a timer configuration as follows:
▪ The flash output needs to be wired to the selected internal Timer signal.
▪ Trigger source and trigger activation for the Timer need to be the same as for the sensor exposure.
▪ The TimerDelay feature (t_TimerDelay) needs to be set to a lower value than the trigger delay (t_triggerdelay).
▪ The duration (t_TimerDuration) of the timer signal should last until the exposure of the sensor is completed. This can be realized by using the following formula:

t_TimerDuration = (t_triggerdelay - t_TimerDelay) + t_exposure
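The relation t_TimerDuration = (t_triggerdelay - t_TimerDelay) + t_exposure can be checked numerically (plain arithmetic; variable names follow the feature names in the text):

```python
def timer_duration(t_triggerdelay, t_timer_delay, t_exposure):
    """Flash delay formula: the timer (flash) starts earlier than the
    exposure by (t_triggerdelay - t_TimerDelay) and must last until the
    sensor exposure is completed."""
    if t_timer_delay > t_triggerdelay:
        raise ValueError("TimerDelay must not exceed the trigger delay")
    return (t_triggerdelay - t_timer_delay) + t_exposure

# Trigger delay 500 us, timer delay 200 us, exposure 1000 us:
print(timer_duration(500, 200, 1000))  # 1300 us of flash duration
```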
9.6.8 Frame Counter
The frame counter is part of the Baumer image info header and supplied with every image, if chunk mode is activated. It is generated by hardware and can be used to verify that every image of the camera is transmitted to the PC and received in the right order.
9.7 Sequencer
9.7.1 General Information
A sequencer is used for the automated control of series of images using different sets of parameters.

◄ Figure 38: Flow chart of the sequencer.
m - number of loop passes
n - number of set repetitions
o - number of sets of parameters
z - number of frames per trigger
The figure above displays the fundamental structure of the sequencer module.
A sequence (o) is defined as a complete pass through all sets of parameters.
The loop counter (m) represents the number of sequence repetitions.
The repeat counter (n) is used to control the number of images taken with the respective sets of parameters.
The start of the sequencer can be realized directly (free running) or via an external event (trigger).
The additional frame counter (z) is used to create a half-automated sequencer. It is completely independent of the other three counters, and is used to determine the number of frames per external trigger event.
The following timeline displays the temporal course of a sequence with:
▪ n = 5 repetitions per set of parameters
▪ o = 3 sets of parameters (A, B and C)
▪ m = 1 sequence and
▪ z = 2 frames per trigger

Sequencer Parameters:
The mentioned sets of parameters include the following:
▪ Exposure time
▪ Gain factor
▪ Output line
▪ Origin of ROI (Offset X, Y)

◄ Figure 39: Timeline for a single sequence.
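The counter structure (m, o, n) described above amounts to three nested loops, which can be sketched as follows (an illustrative model, not the Baumer-GAPI API):

```python
def run_sequencer(parameter_sets, loops_m, repeats_n):
    """Yield one parameter set per frame: a full pass over all sets
    (one sequence) repeated loops_m times, each set used repeats_n times."""
    for m in range(loops_m):            # number of loop passes (m)
        for params in parameter_sets:   # sets of parameters (o)
            for n in range(repeats_n):  # repetitions per set (n)
                yield params

sets = ["A", "B", "C"]
frames = list(run_sequencer(sets, loops_m=2, repeats_n=5))
print(len(frames))  # 30 frames: 2 loops x 3 sets x 5 repetitions
print(frames[:6])   # ['A', 'A', 'A', 'A', 'A', 'B']
```

The frame counter (z) would sit outside this structure, releasing z of these frames per external trigger event.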
9.7.2 Sequencer in Camera XML File
The Baumer Optronic sequencer is described in the category

[Figure: Example of a fully automated sequencer with three sets of parameters A, B and C.]

The figure above shows an example of a fully automated sequencer with three sets of parameters (A, B and C). Here the repeat counter (n) is set to 5, and the loop counter (m) has a value of 2.
When the sequencer is started, with or without an external event, the camera will record 5 images successively in each case, using the sets of parameters A, B and C (which constitutes a sequence). After that, the sequence is started once again, followed by a stop of the sequencer - in this case the parameters are maintained.
9.7.3.2 Sequencer Controlled by Machine Steps (Trigger)

◄ Figure 41: Example of a half-automated sequencer.

The figure above shows an example of a half-automated sequencer with the three sets of parameters (A, B and C) from the previous example. The frame counter (z) is set to 2. This means the camera records two pictures after an incoming trigger signal.

9.7.4 Capability Characteristics of the Baumer-GAPI Sequencer Module
▪ up to 128 sets of parameters
▪ up to 65536 loop passes
▪ up to 65536 repetitions of sets of parameters
▪ up to 65536 images per trigger event
▪ free running mode without initial trigger
9.7.5 Double Shutter

Figure 42 ► Example of a double shutter: trigger, flash, exposure and readout; extraneous light must be prevented during the second exposure.

This feature offers the possibility of capturing two images within a very short interval. Depending on the application, this is performed in conjunction with a flash unit. Thereby the first exposure time (t_exposure) is arbitrary and accompanied by the first flash. The second exposure time must be equal to, or longer than, the readout time (t_readout) of the sensor. Thus the pixels of the sensor are receptive again shortly after the first exposure. In order to realize the second short exposure time without an overrun of the sensor, a second short flash must be employed, and any subsequent extraneous light prevented.
On Baumer VisiLine IP cameras this feature is realized within the sequencer.
In order to generate this sequence, the sequencer must be configured as follows:

Parameter    Setting
Sequencer Run Mode    Once by Trigger
Sets of parameters (o)    2
Loops (m)    1
Repeats (n)    1
Frames per Trigger (z)    2
9.8 Device Reset
The feature Device Reset corresponds to turning the camera off and on again. This is necessary after certain parameterizations (e.g. of the network data) of the camera.
An interruption of the power supply is therefore no longer necessary.
9.9 User Sets
Four user sets (0-3) are available for the Baumer cameras of the VisiLine IP series. User set 0 is the default set and contains the factory settings. User sets 1 to 3 are user-specific and can contain any user-definable parameters.
These user sets are stored within the camera and can be loaded, saved and transferred to other cameras of the VisiLine IP series.
By employing the so-called "user set default selector", one of the four possible user sets can be selected as default, which means the camera starts up with these adjusted parameters.
9.10 Factory Settings
The factory settings are stored in "user set 0", which is the default user set. This is the only user set that is not editable.
9.11 Timestamp
The timestamp is part of the GigE Vision® standard. It is 64 bits long and denoted in ticks*). Every image and event includes its corresponding timestamp.
At power on or reset, the timestamp starts running from zero.

◄ Figure 43: Timestamps of recorded images.

*) The tick is the internal time unit of the camera; it lasts 1 nsec.
10. Interface Functionalities
10.1 Device Information
This Gigabit Ethernet-specific information on the device is part of the Discovery-Acknowledge of the camera.
Included information:
▪ MAC address
▪ Current IP configuration (persistent IP / DHCP / LLA)
▪ Current IP parameters (IP address, subnet mask, gateway)
▪ Manufacturer's name
▪ Manufacturer-specific information
▪ Device version
▪ Serial number
▪ User-defined name (user programmable string)
10.2 Baumer Image Info Header
The Baumer Image Info Header is a data packet which is generated by the camera and integrated into the last data packet of every image, if chunk mode is activated.

Figure 44 ► Location of the Baumer Image Info Header.

This integrated data packet contains different settings for the image. BGAPI can read the Image Info Header. Third-party software which supports chunk mode can read the features in the table below. These settings include (among others):

Feature    Description
ChunkOffsetX    Horizontal offset from the origin to the area of interest (in pixels).
ChunkOffsetY    Vertical offset from the origin to the area of interest (in pixels).
ChunkWidth    Returns the width of the image included in the payload.
ChunkHeight    Returns the height of the image included in the payload.
ChunkPixelFormat    Returns the pixel format of the image included in the payload.
ChunkExposureTime    Returns the exposure time used to capture the image.
ChunkBlackLevelSelector    Selects which black level to retrieve data from.
ChunkBlackLevel    Returns the black level used to capture the image included in the payload.
ChunkFrameID    Returns the unique identifier of the frame (or image) included in the payload.
10.3 Packet Size and Maximum Transmission Unit (MTU)
Network packets can be of different sizes. The size depends on the network components employed. When using GigE Vision®-compliant devices, it is generally recommended to use larger packets: on the one hand the overhead per packet is smaller, on the other hand larger packets cause less CPU load.
The packet size of UDP packets can range from 576 Bytes up to the MTU.
The MTU describes the maximum packet size which can be handled by all network components involved.
In principle, modern network hardware supports a packet size of 1500 Bytes, which is specified in the GigE network standard. "Jumbo frames" merely characterizes a packet size exceeding 1500 Bytes.
Baumer VisiLine IP cameras can handle an MTU of up to 65535 Bytes.
10.4 Inter Packet Gap
To achieve optimal results in image transfer, several Ethernet-specific factors need to be considered when using Baumer VisiLine IP cameras.
Upon starting the image transfer of a camera, the data packets are transferred at maximum transfer speed (1 Gbit/sec). In accordance with the network standard, Baumer employs a minimal separation of 12 Bytes between two packets. This separation is called the "inter packet gap" (IPG). In addition to the minimal IPG, the GigE Vision® standard stipulates that the IPG be scalable (user-defined).

IPG:
The IPG is measured in ticks.
An easy rule of thumb is: 1 tick is equivalent to 4 bits of data.
You should also not forget to add the various Ethernet headers to your calculation.
10.4.1 Example 1: Multi-Camera Operation - Minimal IPG
Setting the IPG to minimum means every image is transferred at maximum speed. Even at a frame rate of 1 fps this results in full load on the network. Such "bursts" can lead to an overload of several network components and a loss of packets. This can occur especially when using several cameras.

▲ Figure 45: Operation of two cameras employing a Gigabit Ethernet switch. Data processing within the switch is displayed in the next two figures.

Figure 46 ► Operation of two cameras employing a minimal inter packet gap (IPG).

In the case of two cameras sending images at the same time, this would theoretically occur at a transfer rate of 2 Gbit/sec. The switch has to buffer this data and transfer it at a speed of 1 Gbit/sec afterwards. Depending on the internal buffer of the switch, this operates without any problems up to n cameras (n ≥ 1). More cameras would lead to a loss of packets. These lost packets can however be recovered by employing an appropriate resend mechanism, but this leads to additional load on the network components.

Max. IPG:
On Gigabit Ethernet the max. IPG and the data packet must not exceed 1 Gbit. Otherwise data packets can be lost.

10.4.2 Example 2: Multi-Camera Operation - Optimal IPG
A better method is to increase the IPG to a size of
In this way both data packets can be transferred successively (zipper principle), and the switch does not need to buffer the packets.

Figure 50 ► Operation of two cameras employing an optimal inter packet gap (IPG).
10.5 Transmission Delay
Another approach to packet sorting in multi-camera operation is the so-called transmission delay.
Due to the fact that the currently recorded image is stored within the camera and its transmission starts with a predefined delay, complete images can be transmitted to the PC at once.
The following figure should serve as an example:
For the image processing, three cameras with different sensor resolutions are employed - for example camera 1: VLG-12M.I, camera 2: VLG-20M.I, camera 3: VLG-02M.I.
Due to process-related circumstances, the image acquisitions of all cameras end at the same time. Now the cameras do not try to transmit their images simultaneously, but - according to the specified transmission delays - successively. Thereby the first camera starts the transmission immediately, with a transmission delay of "0".

◄ Figure 47: Principle of the transmission delay.

10.5.1 Time Saving in Multi-Camera Operation
As previously stated, the transmission delay feature was especially designed for multi-camera operation with different camera models. Here a significant acceleration of the image transmission can be achieved:

◄ Figure 48: Comparison of transmission delay and inter packet gap, employed for a multi-camera system with different camera models.

For the above-mentioned example, the employment of the transmission delay feature results in a time saving of approx. 45% compared to the approach of using the inter packet gap (applied to the transmission of all three images).
10.5.2 Configuration Example

[Figure: Timing of the configuration example - trigger, exposure (t_exposure), readout (t_readout) and GigE transfer (t_transferGigE) for cameras 1 to 3, with the transmission delays of cameras 2 and 3.]
For the three employed cameras the following data are known:

Timings:
A - exposure start for all cameras
B - all cameras ready for transmission
C - transmission start camera 2
D - transmission start camera 3
Camera Model    Sensor Resolution [Pixel]    Pixel Format (Pixel Depth) [bit]    Resulting Data Volume [bit]    Readout Time [msec]    Exposure Time [msec]    Transfer Time (GigE) [msec]
VLG-12M.I    1288 x 960    8    9891840    23.8    32    ≈ 9.2
VLG-20M.I    1624 x 1228    8    15954176    37    32    ≈ 14.9
VLG-02M.I    656 x 490    8    2571520    6.4    32    ≈ 2.4

▪ The sensor resolution and the readout time (t_readout) can be found in the respective Technical Data Sheet (TDS). For the example a full frame resolution is used.
▪ The exposure time (t_exposure) is manually set to 32 msec.
▪ The resulting data volume is calculated as the sensor resolution [Pixel] multiplied by the pixel depth [bit].
Therewith for the example, the transmission delays of cameras 2 and 3 are calculated as follows:

t_TransmissionDelay(Camera 2) = t_exposure(Camera 1) + t_readout(Camera 1) - t_exposure(Camera 2)
t_TransmissionDelay(Camera 3) = t_exposure(Camera 1) + t_readout(Camera 1) - t_exposure(Camera 3) + t_transferGigE(Camera 2)

Solving these equations leads to:

t_TransmissionDelay(Camera 2) = 32 msec + 23.8 msec - 32 msec = 23.8 msec = 23800000 ticks
t_TransmissionDelay(Camera 3) = 32 msec + 23.8 msec - 32 msec + 14.9 msec = 38.7 msec = 38700000 ticks

Notice
In BGAPI the delay is specified in ticks. How do you convert milliseconds into ticks?
1 tick = 1 ns
1 msec = 1000000 ns
1 tick = 0.000001 msec
ticks = t_TransmissionDelay [msec] / 0.000001
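The calculation above can be reproduced in a few lines, using the values from the table and the 1 tick = 1 ns conversion from the Notice (note that this conversion yields 23,800,000 ticks for 23.8 msec):

```python
NS_PER_MS = 1_000_000  # 1 tick = 1 ns, so 1 msec = 1,000,000 ticks

def to_ticks(t_ms):
    """Convert a delay in milliseconds to camera ticks (1 tick = 1 ns)."""
    return round(t_ms * NS_PER_MS)

t_exp = 32.0            # msec, manually set, same for all three cameras
t_readout_cam1 = 23.8   # msec, VLG-12M.I (from the Technical Data Sheet)
t_transfer_cam2 = 14.9  # msec, VLG-20M.I transfer time over GigE

# Transmission delay formulas from the text:
delay_cam2 = t_exp + t_readout_cam1 - t_exp
delay_cam3 = t_exp + t_readout_cam1 - t_exp + t_transfer_cam2

print(delay_cam2, to_ticks(delay_cam2))             # 23.8 -> 23800000 ticks
print(round(delay_cam3, 1), to_ticks(delay_cam3))   # 38.7 -> 38700000 ticks
```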
10.6 Multicast

Multicast Addresses:
For multicasting, Baumer suggests an address range from 232.0.1.0 to 232.255.255.255.

Multicasting offers the possibility of sending data packets to more than one destination address - without multiplying bandwidth between the camera and the multicast device (e.g. router or switch).
The data is sent out to an intelligent network node, an IGMP (Internet Group Management Protocol) capable switch or router, and distributed to the receiver group with the specific address range.
In the example in the figure below, multicast is used to process image and message data separately on two different PCs.

Figure 49 ► Principle of multicast.
10.7 IP Configuration
10.7.1 Persistent IP
A persistent IP address is assigned permanently. Its validity is unlimited.

Notice
Please ensure a valid combination of IP address and subnet mask.

IP range    Subnet mask
0.0.0.0 – 127.255.255.255    255.0.0.0
128.0.0.0 – 191.255.255.255    255.255.0.0
192.0.0.0 – 223.255.255.255    255.255.255.0

These combinations are not checked on the fly by Baumer-GAPI, the Baumer-GAPI Viewer or the camera. The check is performed when restarting the camera; in case of an invalid IP/subnet combination the camera will start in LLA mode.

*) This feature is disabled by default.
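The valid combinations from the table can be checked with a short helper (illustrative only; as noted above, Baumer-GAPI performs no such check on the fly):

```python
import ipaddress

# Classful ranges and the subnet mask the table expects for each:
RANGES = [
    (ipaddress.ip_address("0.0.0.0"), ipaddress.ip_address("127.255.255.255"), "255.0.0.0"),
    (ipaddress.ip_address("128.0.0.0"), ipaddress.ip_address("191.255.255.255"), "255.255.0.0"),
    (ipaddress.ip_address("192.0.0.0"), ipaddress.ip_address("223.255.255.255"), "255.255.255.0"),
]

def valid_combination(ip: str, mask: str) -> bool:
    """Return True if the IP address falls into a table range and the
    subnet mask matches the mask listed for that range."""
    addr = ipaddress.ip_address(ip)
    return any(lo <= addr <= hi and mask == m for lo, hi, m in RANGES)

print(valid_combination("192.168.0.10", "255.255.255.0"))  # True
print(valid_combination("10.0.0.5", "255.255.255.0"))      # False: camera would fall back to LLA
```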
10.7.2 DHCP (Dynamic Host Configuration Protocol)
DHCP automates the assignment of network parameters such as IP addresses, subnet masks and gateways. This process takes up to 12 sec.

Internet Protocol:
On Baumer cameras IP v4 is employed.

▲ Figure 50: Connection pathway for Baumer Gigabit Ethernet cameras. The device connects step by step via the three described mechanisms.

Once the device (client) is connected to a DHCP-enabled network, four steps are processed:

▪ DHCP Discovery
In order to find a DHCP server, the client sends a so-called DHCPDISCOVER broadcast to the network.

◄ Figure 51: DHCP Discovery (broadcast).

▪ DHCP Offer
After reception of this broadcast, the DHCP server answers the request with a unicast, known as DHCPOFFER. This message contains several items of information, such as information for the client (MAC address, offered IP address, subnet mask, duration of the lease) and information on the server (IP address).

◄ Figure 52: DHCP Offer (unicast).

DHCP:
Please pay attention to the DHCP lease time.
Figure 53 ► DHCP Request (broadcast).

DHCP Lease Time:
The validity of DHCP IP addresses is limited by the lease time. When this time has elapsed, the IP configuration needs to be redone. This causes a connection abort.

▪ DHCP Request
Once the client has received this DHCPOFFER, the transaction needs to be confirmed. For this purpose the client sends a so-called DHCPREQUEST broadcast to the network. This message contains the IP address of the offering DHCP server and informs all other possible DHCP servers that the client has obtained all the necessary information, and that there is therefore no need to issue IP information to the client.

▪ DHCP Acknowledgement
Once the DHCP server obtains the DHCPREQUEST, a unicast containing all necessary information is sent to the client. This message is called DHCPACK. According to this information, the client will configure its IP parameters, and the process is complete.

Figure 54 ► DHCP Acknowledgement (unicast).
LLA:
Please ensure operation of the PC within the same subnet as the camera.

10.7.3 LLA
LLA (Link-Local Address) refers to a local IP range from 169.254.0.1 to 169.254.254.254 and is used for the automated assignment of an IP address to a device when no other method of IP assignment is available.
The IP address is determined by the host, using a pseudo-random number generator which operates in the IP range mentioned above.
Once an address is chosen, it is sent together with an ARP (Address Resolution Protocol) query to the network to check whether it already exists. Depending on the response, the IP address is either assigned to the device (if not yet existing) or the process is repeated. This method may take some time - the GigE Vision® standard stipulates that establishing a connection in the LLA should not take longer than 40 seconds; in the worst case it can take up to several minutes.

10.7.4 Force IP *)
Inadvertent faulty operation may result in connection errors between the PC and the camera. In this case "Force IP" may be the last resort. The Force IP mechanism sends an IP address and a subnet mask to the MAC address of the camera. These settings are applied without verification and are adopted immediately by the client. They remain valid until the camera is de-energized.

*) In the GigE Vision® standard, this feature is defined as "Static IP".
10.8 Packet Resend
Due to the fact that the GigE Vision® standard stipulates using UDP - a stateless user datagram protocol - for data transfer, a mechanism for recovering "lost" data needs to be employed.
Here, a resend request is initiated if one or more packets are damaged during transfer and - due to an incorrect checksum - rejected afterwards.
On this topic one must distinguish between three cases:

10.8.1 Normal Case
In the case of unproblematic data transfer, all packets are transferred in their correct order from the camera to the PC. The probability of this happening is more than 99%.

◄ Figure 55: Data stream without damaged or lost packets.

10.8.2 Fault 1: Lost Packet within Data Stream
If one or more packets are lost within the data stream, this is detected by the fact that packet number n is not followed by packet number (n+1). In this case the application sends a resend request (A). Following this request, the camera sends the next packet and then resends (B) the lost packet.
In our example packet no. 3 is lost. This fault is detected on packet no. 4, and the resend request is triggered. The camera then sends packet no. 5, followed by resending packet no. 3.

◄ Figure 56: Resending lost packets within the data stream.

10.8.3 Fault 2: Lost Packet at the End of the Data Stream
In the case of a fault at the end of the data stream, the application will wait for incoming packets for a predefined time. When this time has elapsed, the resend request is triggered and the "lost" packets will be resent.

Figure 57 ► Resending of lost packets at the end of the data stream.

In our example, packets no. 3 to no. 5 are lost. This fault is detected after the predefined time has elapsed, and the resend request (A) is triggered. The camera then resends packets no. 3 to no. 5 (B) to complete the image transfer.

10.8.4 Termination Conditions
The resend mechanism will continue until:
▪ all packets have reached the PC,
▪ the maximum number of resend repetitions is reached,
▪ the resend timeout has occurred, or
▪ the camera returns an error.
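The gap detection behind both fault cases can be sketched as receiver-side logic (our own simplified model, not the GigE Vision implementation):

```python
def detect_missing(received_ids, total):
    """Compare the packet IDs seen so far against the expected sequence
    1..total and return the IDs a resend request would ask for."""
    seen = set(received_ids)
    return [n for n in range(1, total + 1) if n not in seen]

# Fault 1: packet 3 lost within the stream (detected when packet 4 arrives):
print(detect_missing([1, 2, 4, 5], total=5))  # [3]

# Fault 2: packets 3..5 lost at the end (detected after a timeout):
print(detect_missing([1, 2], total=5))        # [3, 4, 5]
```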
10.9 Message Channel
The asynchronous message channel is described in the GigE Vision® standard and offers the possibility of event signaling. There is a timestamp (64 bits) for each announced event, which contains the accurate time the event occurred. Each event can be activated and deactivated separately.

10.9.1 Event Generation

Event    Description
GenICam™
ExposureStart    Exposure started
ExposureEnd    Exposure ended
FrameStart    Acquisition of a frame started
FrameEnd    Acquisition of a frame ended
Line0Rising    Rising edge detected on IO Line 0
Line0Falling    Falling edge detected on IO Line 0
Line1Rising    Rising edge detected on IO Line 1
Line1Falling    Falling edge detected on IO Line 1
Line2Rising    Rising edge detected on IO Line 2
Line2Falling    Falling edge detected on IO Line 2
Line3Rising    Rising edge detected on IO Line 3
Line3Falling    Falling edge detected on IO Line 3
Vendor-specific
EventError    Error in event handling
EventLost    Occurred event not analyzed
TriggerReady    t_notready elapsed, camera is able to process an incoming trigger
TriggerOverlapped    Overlapped mode detected
TriggerSkipped    Camera overtriggered
10.10 Action Command / Trigger over Ethernet
The basic idea behind this feature was to achieve a simultaneous trigger for multiple cameras.

Action Command:
Since hardware release 2.1 the implementation of the action command follows the regulations of the GigE Vision® standard 1.2.

Therefore a broadcast Ethernet packet was implemented. This packet can be used to induce a trigger as well as other actions.
Due to the fact that different network components feature different latencies and jitters, the trigger over Ethernet is not as synchronous as a hardware trigger. Nevertheless, applications can deal with these jitters in switched networks, and therefore this is a convenient method for synchronizing cameras by software.
The action command is sent as a broadcast. In addition it is possible to group cameras, so that not all attached cameras respond to a broadcast action command.
Such an action command contains:
▪ a Device Key - for authorization of the action on this device
▪ an Action ID - for identification of the action signal
▪ a Group Key - for triggering actions on separate groups of devices
▪ a Group Mask - for extension of the range of separate device groups

10.10.1 Example: Triggering Multiple Cameras
The figure below displays three cameras which are triggered synchronously by a software application.

Figure 58 ► Triggering of multiple cameras via trigger over Ethernet (ToE).

Another application of the action command is that a secondary application or PC, or one of the attached cameras, can actuate the trigger.
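Whether a camera reacts to an action command is decided by the key and group fields listed above. A common matching rule - sketched here as an assumption based on the GigE Vision action command concept, not taken from this manual - is that the device key must authorize the action, the group key must match exactly, and the group masks must overlap:

```python
def accepts_action(cam, device_key, group_key, group_mask):
    """Sketch: a camera executes the action if the device key authorizes it,
    the group key matches, and the group masks overlap (bitwise AND != 0)."""
    return (cam["device_key"] == device_key
            and cam["group_key"] == group_key
            and (cam["group_mask"] & group_mask) != 0)

cam = {"device_key": 0x1234, "group_key": 0x1, "group_mask": 0b0110}
print(accepts_action(cam, 0x1234, 0x1, 0b0100))  # True  (masks overlap)
print(accepts_action(cam, 0x1234, 0x1, 0b1000))  # False (no mask overlap)
```

The group mask thus lets one broadcast packet address an arbitrary subset of the cameras sharing a group key.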
11. Start-Stop Behaviour
11.1 Start / Stop / Abort Acquisition (Camera)
Once the image acquisition is started, three steps are processed within the camera:
▪ Determination of the current set of image parameters
▪ Exposure of the sensor
▪ Readout of the sensor
Afterwards this process is repeated until the camera is stopped.
Stopping the acquisition means that the process mentioned above is aborted. If the stop signal occurs during a readout, the current readout will be finished before the camera stops. If the stop signal arrives during an exposure, the exposure is aborted.

Abort Acquisition
The acquisition abort represents a special case of stopping the current acquisition. When an exposure is running, the exposure is aborted immediately and the image is not read out.

11.2 Start / Stop Interface
Without starting the interface, transmission of image data from the camera to the PC will not proceed. If the image acquisition is started before the interface is activated, the recorded images are lost.
If the interface is stopped during a transmission, the transmission is aborted immediately.

11.3 Acquisition Modes
In general, three acquisition modes are available for the cameras of the Baumer VisiLine IP series.

11.3.1 Free Running
Free running means the camera records images continuously without external events.

11.3.2 Trigger
The basic idea behind the trigger mode is the synchronization of cameras with machine cycles. Trigger mode means that image recording is not continuous, but triggered by external events.
This feature is described in chapter 9.6 Process Interface.

11.3.3 Sequencer
A sequencer is used for the automated control of series of images, using different settings for exposure time and gain.
Cleaning12.
volatile
solvents
Cover glass
Notice
The sensor is mounted dust-proof. Removing the cover glass for cleaning is not necessary.
Avoid cleaning the cover glass of the sensor if possible. To prevent dust, follow the instructions under "Install lens".
If you must clean it, use compressed air or a soft, lint-free cloth dampened with a small
quantity of pure alcohol.
Housing
Caution!
Volatile solvents for cleaning.
Volatile solvents damage the surface of the camera.
Never use volatile solvents (benzine, thinner) for cleaning!
To clean the surface of the camera housing, use a soft, dry cloth. To remove persistent
stains, use a soft cloth dampened with a small quantity of neutral detergent, then wipe
dry.
13. Transport / Storage
Notice
Transport the camera only in the original packaging. When the camera is not installed,
store it in the original packaging.
14. Disposal
Dispose of outdated products with electrical or electronic circuits not in the normal
domestic waste, but rather according to your national law and the directives
2002/96/EC and 2006/66/EC for recycling at the competent collection points.
The proper disposal of obsolete equipment helps to save valuable resources and
prevents possible adverse effects on human health and the environment.
Returning the packaging to the material cycle helps conserve raw materials and reduces
the production of waste. When no longer required, dispose of the packaging materials in
accordance with the local regulations in force.
Keep the original packaging during the warranty period in order to be able
to pack the device properly in the event of a warranty claim.
15. Warranty Notes
Notice
If it is obvious that the device is / was dismantled, reworked or repaired by anyone other
than Baumer technicians, Baumer Optronic will not take any responsibility for the
subsequent performance and quality of the device!
16. Support
If you have any problems with the camera, feel free to contact our support.
17. Conformity
Cameras of the Baumer VisiLine IP family comply with:
▪ CE
▪ FCC Part 15 Class B
▪ RoHS
17.1 CE
We declare, under our sole responsibility, that the previously described cameras of the
Baumer VisiLine IP family conform to the applicable EC directives.
17.2 FCC – Class B Device
Note: This equipment has been tested and found to comply with the limits for a Class B
digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide
reasonable protection against harmful interference in a residential environment. This
equipment generates, uses, and can radiate radio frequency energy and, if not installed
and used in accordance with the instructions, may cause harmful interference to radio
communications. However, there is no guarantee that interference will not occur in a
particular installation. If this equipment does cause harmful interference to radio or
television reception, which can be determined by turning the equipment off and on, the
user is encouraged to try to correct the interference by one or more of the following
measures:
▪ Reorient or relocate the receiving antenna.
▪ Increase the separation between the equipment and the receiver.
▪ Connect the equipment into an outlet on a circuit different from that to which the
receiver is connected.
▪ Consult the dealer or an experienced radio/TV technician for help.