Image processing systems
Vision Sensor SIMATIC VS120
Operating Instructions
Introduction 1
Safety instructions 2
Description 3
Image processing 4
Network and system integration 5
Installation 6
Connecting 7
Commissioning 8
Operation 9
Process interfacing over an automation system (PLC, PC) 10
Alarm, error and system messages 11
Technical data 12
Dimension drawings 13
Scope of delivery/Spares/Accessories 14
Service & Support 15
Directives and declarations 16
Edition 02/2006
A5E00757507-01
Safety Guidelines
This manual contains notices you have to observe in order to ensure your personal safety, as well as to prevent
damage to property. The notices referring to your personal safety are highlighted in the manual by a safety alert
symbol; notices referring only to property damage have no safety alert symbol. The notices shown below are
graded according to the degree of danger.
Danger
indicates that death or severe personal injury will result if proper precautions are not taken.
Warning
indicates that death or severe personal injury may result if proper precautions are not taken.
Caution
with a safety alert symbol, indicates that minor personal injury can result if proper precautions are not taken.
Caution
without a safety alert symbol, indicates that property damage can result if proper precautions are not taken.
Notice
indicates that an unintended result or situation can occur if the corresponding information is not taken into
account.
If more than one degree of danger is present, the warning notice representing the highest degree of danger will
be used. A notice warning of injury to persons with a safety alert symbol may also include a warning relating to
property damage.
Qualified Personnel
The device/system may only be set up and used in conjunction with this documentation. Commissioning and
operation of a device/system may only be performed by qualified personnel. Within the context of the safety notes
in this documentation qualified persons are defined as persons who are authorized to commission, ground and
label devices, systems and circuits in accordance with established safety practices and standards.
Prescribed Usage
Note the following:
Warning
This device may only be used for the applications described in the catalog or the technical description and only in
connection with devices or components from other manufacturers which have been approved or recommended
by Siemens. Correct, reliable operation of the product requires proper transport, storage, positioning and
assembly as well as careful operation and maintenance.
Trademarks
All names identified by ® are registered trademarks of Siemens AG. The remaining trademarks in this
publication may be trademarks whose use by third parties for their own purposes could violate the rights of the
owner.
Disclaimer of Liability
We have reviewed the contents of this publication to ensure consistency with the hardware and software
described. Since variance cannot be precluded entirely, we cannot guarantee full consistency. However, the
information in this publication is reviewed regularly and any necessary corrections are included in subsequent
editions.
Siemens AG
Automation and Drives
Postfach 48 48
90437 NÜRNBERG
GERMANY
10.4.2.1 Control byte............................................................................................................................ 10-11
10.4.2.2 Status byte ............................................................................................................................. 10-12
10.4.2.3 User data interface "Send" VS120 processing unit >>> automation system......................... 10-12
10.4.2.4 User data interface "Receive" automation system >>> VS120 processing unit.................... 10-14
10.4.3 Programming data fragmentation .......................................................................................... 10-14
10.5 Function block FB1 ................................................................................................................ 10-16
11.2 Error messages and error handling ......................................................................................... 11-1
11.3 Diagnostics based on the "BF" LED ...................................................................................... 11-10
11.4 Slave diagnostics or I/O device diagnostics ........................................................................... 11-11
12 Technical data ...................................................................................................................................... 12-1
12.1 General technical specifications .............................................................................................. 12-1
12.2 Technical specifications of SIMATIC VS120 ........................................................................... 12-5
12.3 Port assignment of the processing unit.................................................................................... 12-8
14.4 C-mount lens and inspection window size............................................................................... 14-5
15 Service & Support................................................................................................................................. 15-1
This manual contains all the information you require to install, commission and work with the
SIMATIC VS120 Vision Sensor System.
It is intended both for persons configuring and installing automated plants with image
processing systems and for service and maintenance technicians.
Scope of this manual
The manual is valid for all supplied versions of the SIMATIC VS120 Vision Sensor system
and the processing unit with order number (MLFB) 6GF1 018-2AA10.
Caution
Please observe the safety instructions on the back of the cover sheet of this documentation.
You should not make any expansions to your device unless you have read the relevant
safety instructions.
This device meets the relevant safety requirements in compliance with IEC, VDE, and EN. If
you have questions about the validity of the installation in the planned environment, please
contact your service representative.
Repairs
Only authorized personnel are permitted to repair the device.
Warning
Unauthorized opening of and improper repairs to the device may result in substantial
damage to equipment or risk of personal injury to the user.
System expansion
Only install system expansions intended for this device. If you install other upgrades, you
may damage the system or violate the safety requirements and regulations for radio
frequency interference suppression. Contact your technical support team or where you
purchased your device to find out which system expansion devices may safely be installed.
Caution
If you install or exchange system expansions and damage your device, the warranty becomes void.
The SIMATIC VS120 Vision Sensor is used for the optical detection and testing of objects
with lighting from above. The SIMATIC VS120 Vision Sensor checks whether the correct object is being
tested and whether it is damaged, and it determines the position of the object.
The SIMATIC VS120 Vision Sensor returns the following recognition values during object
recognition:
• x coordinate
• y coordinate
• Angle
• Quality rating of the specimen, number of detected parts
This object recognition data is transferred to the processing units of automation systems, where it is
processed further.
The SIMATIC VS120 Vision Sensor is suitable for:
• Recognition of parts in sorting tasks
• Determining the position for Pick & Place applications
• Checking the presence and position of objects in production
• Checking position in feed systems, for example with oscillating conveyors, workpiece holders, conveyor
belts, circulating systems, grasper units and robots
Testing the correctness of individual characteristics of the specimen
64 models are available for the recognition of specimens. The SIMATIC VS120 checks
whether or not the individual characteristics of the specimens have the same shape as those
in the trained model.
When specifying the recognition and evaluation areas, avoid shiny surfaces on specimens.
Principle of edge recognition
To recognize image patterns, edges are used. These edges are the transitions in the image
from light to dark or vice versa. A model is created from the sum of the edges extracted in
the image and their arrangement.
Recognition and localization of parts
The SIMATIC VS120 scans specimens, determines their coordinates including the roll angle,
and passes them to the control system, such as an S7, for example via PROFIBUS DP.
Testing the completeness of a model
The SIMATIC VS120 also checks specimens for completeness. Deviations from the trained
model are detected and the quality values of the evaluation are displayed.
Sorting functions for models and model sets
Depending on the requirements of the application, 15 model sets with 64 different models can
be assembled and saved for processing. The models are sorted according to the application
by a controller for processing with the SIMATIC VS120.
You require the following hardware and software components for the SIMATIC VS120 Vision
Sensor system:
Hardware
• SIMATIC VS120 processing unit
• Sensor head with CCD sensor chip for detection of the object
• LED ring flash for SIMATIC VS with degree of protection IP65 (not included in every full
package), for optimum illumination of the object
• Cables:
– Power supply cable
– Lighting cable
– Sensor cable
– DI / DO cable
• Documentation package
– Operating Instructions (compact)
– Documentation CD
Software
You also require the following:
• 24 V DC, 2 A power supply (20.4 ... 28.8 V DC, safety extra-low voltage, SELV)
• PC / PG with the following configuration:
– At least 500 MHz clock frequency
– Graphics card with at least 65536 colors and a resolution of at least 1024 x 768 pixels
– Ethernet port with up to 100 Mbps (protocol: TCP/IP)
• Crossover RJ-45 Ethernet cable for connecting the processing unit and the PC / PG
• Microsoft Windows XP Professional SP1 operating system with Internet Explorer 6.0 as of
SP1
• Microsoft Java VM or Sun Java VM version J2SE 1.4.2_06 or J2SE 5.0 (you will find
more detailed information on the Internet at the following address:
http://www.java.sun.com/J2SE)
To form patterns that can be recognized, edges (transitions from light to dark or vice versa)
from the image are used. Although the algorithm extracts the edges automatically, the user
must make sure that the lighting is ideal for an image with good contrast; in other words, to
create models for recognition, it is essential to use the lighting correctly to achieve an image
with high contrast.
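To illustrate the principle only (the edge extraction actually implemented in the SIMATIC VS120 is not documented here), the following Python sketch marks strong light/dark transitions as edges using a simple gradient threshold; the threshold value and the NumPy-based implementation are assumptions chosen for the example. It also shows why a high-contrast image yields far more usable edges than a low-contrast one.

# Illustration only: edges as strong light/dark transitions found with a
# simple gradient threshold (not the VS120 algorithm).
import numpy as np

def extract_edges(image: np.ndarray, threshold: float = 20.0) -> np.ndarray:
    """Return a boolean edge map for a grayscale image with values 0..255."""
    gy, gx = np.gradient(image.astype(float))   # brightness change per pixel
    magnitude = np.hypot(gx, gy)                # strength of the transition
    return magnitude > threshold                # keep strong transitions only

# A high-contrast image yields many edge pixels, a low-contrast one almost none.
high_contrast = np.zeros((480, 640)); high_contrast[:, 320:] = 255.0
low_contrast = np.full((480, 640), 128.0); low_contrast[:, 320:] = 140.0
print(extract_edges(high_contrast).sum(), extract_edges(low_contrast).sum())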
Note
The installation of suitable lighting often takes more time than all the other activities, such
as securing the camera, connecting to the PLC, training and setting the correct triggers, put
together. With metallic surfaces in particular, it is advisable to consult a lighting expert
because of the possible shine.
Part of object recognition is the recognition of the position of the object in the image.
The starting point is the midpoint of the image to which all coordinates relate. The top left
has the coordinates (-320; 240) and bottom right (320; -240).
If the object is not recognized, the position at the top left is output for x / y. The user should
therefore always query whether the result is an OK or N_OK evaluation and not rely on the
x / y positions alone!
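The following Python sketch illustrates the coordinate convention and the recommended check of the OK / N_OK evaluation before the x / y values are used; the field names (ok, x, y) are placeholders for this example and not the actual user data interface of the VS120.

# Sketch only: converting the image-centered coordinates (top left (-320; 240),
# bottom right (320; -240)) to pixel coordinates and refusing to use the
# coordinates of a N_OK evaluation. The fields ok, x, y are placeholders,
# not the actual VS120 user data interface.
from dataclasses import dataclass

@dataclass
class RecognitionResult:
    ok: bool      # OK / N_OK evaluation
    x: float      # x coordinate, image midpoint = 0, right = positive
    y: float      # y coordinate, image midpoint = 0, up = positive

def to_pixel_coordinates(result: RecognitionResult) -> tuple:
    if not result.ok:
        # A N_OK evaluation outputs the top-left position; do not use it.
        raise ValueError("object not recognized (N_OK), coordinates are invalid")
    column = result.x + 320.0   # top left (-320, 240) maps to column 0 ...
    row = 240.0 - result.y      # ... and to row 0
    return column, row

print(to_pixel_coordinates(RecognitionResult(ok=True, x=0.0, y=0.0)))  # (320.0, 240.0)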
Correct exposure time (shutter speed) influences the quality of the extracted edges. To
control the exposure time / brightness, you can use the parameters Shutter speed and
Brightness.
The shutter speed / brightness must be set to obtain the optimum contrast. The automatic
exposure control can help to achieve the optimum shutter speed setting.
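As an illustration of the idea behind automatic exposure control (not of the VS120 implementation itself), the following Python sketch adjusts an assumed shutter time until the mean image brightness lies within a target band; the capture function, the target band and the step factor are assumptions made for the example.

# Generic auto-exposure loop (illustration only, not the VS120 implementation):
# lengthen or shorten the assumed shutter time until the mean brightness of the
# captured image lies inside a target band.
import numpy as np

def auto_exposure(capture, shutter_us=1000.0, target=(100.0, 150.0), steps=20):
    """capture(shutter_us) must return a grayscale image with values 0..255."""
    low, high = target
    for _ in range(steps):
        mean = capture(shutter_us).mean()
        if mean < low:
            shutter_us *= 1.2      # image too dark: expose longer
        elif mean > high:
            shutter_us /= 1.2      # image too bright: expose shorter
        else:
            break                  # brightness is inside the target band
    return shutter_us

# Simulated camera: brightness grows with the shutter time, clipped at 255.
simulated = lambda shutter: np.clip(np.full((480, 640), shutter * 0.1), 0, 255)
print(auto_exposure(simulated, shutter_us=200.0))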
Below, you will find examples of different shutter speeds and disturbing contours:
Shiny areas on the surfaces of the part cause disturbing edges that must be avoided for a
representative model. In the example shown below, you can see clearly that even the
automatic shutter control can cause bad edges in this case. These edges make recognition
of the parts unreliable since they are often not reproducible.
In the trained model shown here, you can see
unwanted edge lines that reduce quality during
the search and recognition and therefore ought
to be avoided.
If this method cannot be used, the user can do the following:
• Use the erasure function on the edges of the model to enable optimal training of the
contour
This image was manually overexposed. The
contour is ideal and can be clearly recognized.
• Correct the problem by setting the shutter speed offset for automatic exposure control
Other interference affecting object recognition
In addition to the previously mentioned interference, other factors can also have a negative
influence on the search for a pattern.
• Shadows (particularly caused by the depth of the objects)
• Non-uniform lighting
• Geometric distortion by the lens, particularly when the camera is not perpendicular to the
pattern
• Blurring due to motion if the shutter speed is too slow for moving parts
There are functions and parameters in the SIMATIC VS120 Vision Sensor to reduce the
negative effects of such interference in recognizing parts. They help to create optimal edges
from the image to generate patterns.
4.3 Generating models and detecting orientation
A model is created from the sum of the edges extracted in the image and their arrangement.
To ensure good processing quality, the contours of the model should lie within the ROI
(Region of Interest).
4.3.1 Setting for the Precision parameter
The precision setting is based on the size of the ROIs and recognizable changes in the
specimen. The search for a part in the image is "pyramidal". It starts with a coarse search at
low resolution and finishes with a fine search at high resolution. The Precision parameter
affects the coarse and fine search.
Coarse and fine search
The table shows the start and end values of the resolution during the search process with
the various levels of precision.
Precision level   Start value for the resolution   End value for the resolution
                  Width x Height (in pixels)       Width x Height (in pixels)
Fine1             320 x 240                        640 x 480
Fine2             160 x 120                        640 x 480
Fine3             80 x 60                          640 x 480
Medium1           80 x 60                          320 x 240
Medium2           40 x 30                          320 x 240
Coarse1           40 x 30                          160 x 120
Coarse2           20 x 15                          160 x 120
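The following Python sketch illustrates the coarse-to-fine ("pyramidal") principle described above with a simple template search: the match found at a low resolution is refined in a small neighborhood at each higher resolution. It is only an illustration of the principle, not the search algorithm actually implemented in the VS120.

# Illustration of a coarse-to-fine ("pyramidal") template search, not the
# VS120 algorithm: the position found at a low resolution is refined in a
# small neighborhood at every higher resolution.
import numpy as np

def downsample(img, factor):
    """Average-pool the image by an integer factor."""
    h, w = img.shape[0] // factor, img.shape[1] // factor
    return img[:h * factor, :w * factor].reshape(h, factor, w, factor).mean(axis=(1, 3))

def match_score(img, tpl, y, x):
    """Negative sum of absolute differences of the template at (y, x)."""
    window = img[y:y + tpl.shape[0], x:x + tpl.shape[1]]
    return -np.abs(window - tpl).sum()

def pyramid_search(image, template, start_factor=4):
    """Return the (row, column) of the best match, searching coarse to fine."""
    best, factor = (0, 0), start_factor
    while factor >= 1:
        img_s, tpl_s = downsample(image, factor), downsample(template, factor)
        max_y = img_s.shape[0] - tpl_s.shape[0]
        max_x = img_s.shape[1] - tpl_s.shape[1]
        if factor == start_factor:
            # coarse level: search the whole (small) image
            candidates = [(y, x) for y in range(max_y + 1) for x in range(max_x + 1)]
        else:
            # finer levels: search only around the position found so far
            cy, cx = best[0] * 2, best[1] * 2
            candidates = [(cy + dy, cx + dx)
                          for dy in range(-2, 3) for dx in range(-2, 3)
                          if 0 <= cy + dy <= max_y and 0 <= cx + dx <= max_x]
        best = max(candidates, key=lambda p: match_score(img_s, tpl_s, p[0], p[1]))
        factor //= 2
    return best

# Example: cut a 64 x 64 template out of a random 640 x 480 image and find it again.
rng = np.random.default_rng(0)
scene = rng.random((480, 640)) * 255.0
template = scene[100:164, 200:264].copy()
print(pyramid_search(scene, template))   # close to (100, 200)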
The precision for determining the position should be set as follows:
• "Fine" for the sub-pixel range
• "Medium" for +/-1 pixel and +/-1°
• "Coarse" for +/- 2 pixels and +/-1°
– The precision for determining the position still depends on the pattern size and the
number of edges found in it and may therefore deviate from the values shown above.
– The angle precision can be increased to < 1° with the "Angle Precision" parameter in
"Options - Extras tab".
Note
If the setting is "Fine1" and the model is large, the processing times may be several seconds.
Note
If exposure is set to "Manual" and the user changes the precision in the adjustment
support (adjust sensor), the "shutter speed" exposure parameter is adjusted
automatically. Depending on the distance of the object to the camera, this can cause
inaccuracies.
Example illustrating the Precision parameter
A wall is hung full of A4 sheets on which various texts have been printed. An observer has
the task of finding a specific sheet among all the others.
Procedure:
• To accelerate the search, the observer stands at a considerable distance from the wall.
The distance from the wall selected by the observer depends on the criteria on which the
search is based, among other factors.
• The observer begins to presort all the sheets. If the observer is looking for a large rough
drawing, they will stand a long way away to be able to see all the drawings at the same time.
In this case, the observer would select "Fine3".
• If the observer concentrates on details, such as the text format or heading, they would move
closer. Since they are examining more details, the search takes correspondingly longer. In
this case, the observer would select Fine2 or Fine1.
• Once the observer has made a rough selection, they move closer to the sheets and
investigate each sheet in detail. They now compare individual words and image details
exactly with a reference sheet. The observer no longer examines every sheet in detail
because they have already limited the selection.
The algorithm of the SIMATIC VS120 Vision Sensor works in much the same way as the
example described above.
4.3.2 Measures for optimizing object recognition
Problem: Object was not trained
If the object could not be trained, the reason may be that there were not enough contours in
the selected ROI.
Remedy
• Make sure that the ROI is selected correctly (position and size) and that the object is
within the ROI when training.
• If this problem still occurs, the object to be trained has too few contours. In this case, a
change in the setting of the precision towards greater precision might help, for example,
from Medium2 to Medium1 or to Fine1.
• If these measures still do not lead to success, try the following for example:
– Select other lighting
– Specify other more detailed object regions in the ROI
– Enlarge the ROI or similar
• Another remedy is to change the brightness to achieve high contrast so that the changes in
the image can be detected clearly.
4.4 Quality of the measured values
All the displayed measured values for the imaging geometry of a recognized model are subject to the
following inaccuracies.
Processing precision
• for the position (x and y coordinates): up to ± 0.1 pixels
• for the angle (angle precision): up to ± 0.2°
The processing precision is influenced by the following factors:
• Lighting effects such as reflection and shadow
• Perspective distortion, such as when the camera is too close to or at too oblique an angle to
the object
• Differences in the object, for example, dirty objects
• Variation in the trained background structure
Fluctuations in size
Fluctuations in size in the image of up to ± 10 % are tolerated if the specimens are in the same
position as the trained pattern. These fluctuations can be caused by the following:
• Different distances between specimens and the lens caused by a different position on the
conveyor belt or workpiece holder
• Different pattern sizes in the specimen
Perspective distortion
• Perspective distortion in the recorded image is tolerated if the specimens have the
same orientation as the trained pattern.
• If there is perspective distortion and the orientation is different, no general statement is
possible. In this case, the shape of the specimens and the angle between the camera
level and pattern level are the factors that determine whether or not the specimens can
be recognized.
The following table shows which actual length corresponds to the side length of a pixel.
Remember that this value applies only for the specified image width. The sensor heads
6GF2 002-8DA (SIMATIC VS120 for large specimens) and 6GF2 002-8EA (SIMATIC VS120
for small specimens) were based on the maximum possible image widths.
                                     Graphic width   Resolution per pixel       Resolution per pixel
                                                     at 640 x 480               at 320 x 240
C/CS mount                           12 mm           12/640 = 0.02 mm / pixel   12/320 = 0.04 mm / pixel
SIMATIC VS120 for large specimens    70 mm           70/640 = 0.11 mm / pixel   70/320 = 0.22 mm / pixel
SIMATIC VS120 for small specimens    40 mm           40/640 = 0.06 mm / pixel   40/320 = 0.13 mm / pixel
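The values in the table follow directly from dividing the image (graphic) width by the number of horizontal pixels of the evaluated image, as the following small Python check shows (values printed with three decimals; the table rounds to two).

# Resolution per pixel = image (graphic) width / number of horizontal pixels.
for width_mm in (12, 70, 40):
    print(f"{width_mm} mm: {width_mm / 640:.3f} mm/pixel at 640x480, "
          f"{width_mm / 320:.3f} mm/pixel at 320x240")
# e.g. "70 mm: 0.109 mm/pixel at 640x480, 0.219 mm/pixel at 320x240"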
4.5 Geometric distortion
Geometric distortion caused by the lens is compensated. With sensor heads with fixed
lenses, the value of the distortion is set automatically and should no longer be changed. If
standard lenses with a C mount are used, the user can make the compensation manually by
changing the parameters.
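As an illustration of what such a distortion parameter does, the following Python sketch applies the common first-order radial distortion model; the model, the parameter k1 and the simple inversion are generic assumptions for this example and do not describe the VS120's internal compensation.

# Generic first-order radial distortion model (illustration only; not the
# VS120's internal parameterization). A point at radius r from the image
# center is displaced to r * (1 + k1 * r^2); compensation applies the inverse.

def undistort_point(x, y, k1):
    """Approximately remove first-order radial distortion from an
    image-centered point; adequate only for small k1."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x / scale, y / scale

# Example: slight barrel distortion (k1 < 0) pulled a corner point inward.
print(undistort_point(300.0, 200.0, k1=-1e-7))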
4.6 Main ROI and sub-ROI
Processing with main ROIs is usually sufficient to evaluate the image. ROIs (Regions of
Interest) are used to distinguish a part better from the background. The sub-ROI option
added to the main ROI allows certain details of patterns, which would otherwise be
indistinguishable in comparison to the total contour, to be weighted more heavily. Testing for
damage or completeness is an example of this.
This is, for example, the case if you have shiny areas or variable areas in the object. Using
sub-ROIs, you can concentrate the search and the evaluation on the important
characteristics and suppress irrelevant ones.
Procedure
1. Train the main ROI, concentrating on the invariable characteristics of the specimen.
2. Select the "ROI: New" button in the dialog "Training - ROI tab" of the adjustment support. A
rectangle or circle appears on the screen, depending on the shape selected for the sub-ROI.
3. Change the size and position of the sub-ROI in the same way you define the main ROI.
Task description: The task is to check whether the Siemens logo was printed completely.
In the image on the left, you can see
the edges marked for sub-ROI3. The
main ROI is the large window, while
sub-ROI1 encompasses "SIE" and
sub-ROI2 encompasses "ME".
Parameter assignment
Parameter name   Main ROI          Sub-ROI 1, 2 and 3
Task             Find (default)    Find (default)
Scaling          Fixed             Fixed (default)
Precision        Fine3             Fine1
Model type       Edges (default)   Edges (default)
• The sub-ROIs can be set with the precision Fine1 since the pattern windows are small.
This ensures that no details are lost.
• Fine3 should, however, be selected for the main ROI; otherwise, the processing time would
be too long. In this case, the selection of the precision (Fine1, Fine2 or Fine3) has no
effect on the quality value of the result.