Photon Focus MV1-D1312(IE)-G2, DR1-D1312(IE)-G2 User Manual

User Manual
MV1-D1312(IE)-G2 / DR1-D1312(IE)-G2
Gigabit Ethernet Series
CMOS Area Scan Camera
MAN049 05/2014 V1.4
All information provided in this manual is believed to be accurate and reliable. No responsibility is assumed by Photonfocus AG for its use. Photonfocus AG reserves the right to make changes to this information without notice.
Reproduction of this manual in whole or in part, by any means, is prohibited without prior permission having been obtained from Photonfocus AG.
Contents
1 Preface 7
1.1 About Photonfocus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Contact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Sales Offices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Further information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Legend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 How to get started (GigE G2) 9
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Hardware Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Software Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.4 Network Adapter Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Network Adapter Configuration for Pleora eBUS SDK . . . . . . . . . . . . . . . . . . 17
2.6 Getting started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3 Product Specification 23
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2 Feature Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3 Technical Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.4 RGB Bayer Pattern Filter (colour models only) . . . . . . . . . . . . . . . . . . . . . . . 32
4 Functionality 33
4.1 Image Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.1.1 Readout Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.1.2 Readout Timing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.1.3 Exposure Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.1.4 Maximum Frame Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.2 Pixel Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.2.1 Linear Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.2.2 LinLog®. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.3 Reduction of Image Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.3.1 Region of Interest (ROI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.3.2 ROI configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.3.3 Calculation of the maximum frame rate . . . . . . . . . . . . . . . . . . . . . . 50
4.3.4 Multiple Regions of Interest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.5 Decimation (monochrome models only) . . . . . . . . . . . . . . . . . . . . . . 54
4.4 Trigger and Strobe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.4.2 Trigger Source . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.4.3 Trigger and AcquisitionMode . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.4.4 Exposure Time Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.4.5 Trigger Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.4.6 Burst Trigger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.4.7 Software Trigger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.4.8 Strobe Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.5 Data Path Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.6 Image Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.6.2 Offset Correction (FPN, Hot Pixels) . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.6.3 Gain Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.6.4 Corrected Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.7 Digital Gain and Offset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.8 Grey Level Transformation (LUT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.8.1 Gain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.8.2 Gamma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.8.3 User-defined Look-up Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.8.4 Region LUT and LUT Enable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.9 Convolver (monochrome models only) . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.9.1 Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.9.2 Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.9.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.10 Crosshairs (monochrome models only) . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.10.1 Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.11 Image Information and Status Line (not available for DR1-D1312(IE)) . . . . . . . . . 87
4.11.1 Counters and Average Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.11.2 Status Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.12 Test Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.12.1 Ramp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.12.2 LFSR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.12.3 Troubleshooting using the LFSR . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.13 Double Rate (DR1-D1312(IE) only) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5 Hardware Interface 95
5.1 GigE Connector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.2 Power Supply Connector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.3 Status Indicator (GigE cameras) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.4 Power and Ground Connection for GigE G2 Cameras . . . . . . . . . . . . . . . . . . 96
5.5 Trigger and Strobe Signals for GigE G2 Cameras . . . . . . . . . . . . . . . . . . . . . 98
5.5.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.5.2 Single-ended Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
5.5.3 Single-ended Outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.5.4 Differential RS-422 Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.5.5 Master / Slave Camera Connection . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.6 PLC connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6 Software 105
6.1 Software for Photonfocus GigE Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.2 PF_GEVPlayer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.2.1 PF_GEVPlayer main window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
6.2.2 GEV Control Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
6.2.3 Display Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.2.4 White Balance (Colour cameras only) . . . . . . . . . . . . . . . . . . . . . . . . 108
6.2.5 Save camera setting to a file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.2.6 Get feature list of camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.3 Pleora SDK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.4 Frequently used properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.5 Calibration of the FPN Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6.5.1 Offset Correction (CalibrateBlack) . . . . . . . . . . . . . . . . . . . . . . . . . 110
6.5.2 Gain Correction (CalibrateGrey) . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6.5.3 Storing the calibration in permanent memory . . . . . . . . . . . . . . . . . . 111
6.6 Look-Up Table (LUT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.6.2 Full ROI LUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.6.3 Region LUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
6.6.4 User defined LUT settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
6.6.5 Predefined LUT settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
6.7 MROI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
6.8 Permanent Parameter Storage / Factory Reset . . . . . . . . . . . . . . . . . . . . . . 113
6.9 Persistent IP address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
6.10 PLC Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
6.10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
6.10.2 PLC Settings for ISO_IN0 to PLC_Q4 Camera Trigger . . . . . . . . . . . . . . . 116
6.11 Miscellaneous Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
6.11.1 DeviceTemperature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
6.11.2 PixelFormat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
6.11.3 Colour Fine Gain (Colour cameras only) . . . . . . . . . . . . . . . . . . . . . . 117
6.12 Width setting in DR1 cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.13 Decoding of images in DR1 cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.13.1 Status line in DR1 cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.14 DR1Evaluator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
7 Mechanical and Optical Considerations 121
7.1 Mechanical Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.1.1 Cameras with GigE Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.2 Optical Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
7.2.1 Cleaning the Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
7.3 Compliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
8 Warranty 125
8.1 Warranty Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
8.2 Warranty Claim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
9 References 127
A Pinouts 129
A.1 Power Supply Connector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
B Revision History 131
1 Preface
1.1 About Photonfocus
The Swiss company Photonfocus is one of the leading specialists in the development of CMOS image sensors and corresponding industrial cameras for machine vision, security & surveillance and automotive markets.
Photonfocus is dedicated to making the latest generation of CMOS technology commercially available. Active Pixel Sensor (APS) and global shutter technologies enable high speed and high dynamic range (120 dB) applications, while avoiding disadvantages like image lag, blooming and smear.
Photonfocus has proven that the image quality of modern CMOS sensors is now appropriate for demanding applications. Photonfocus’ product range is complemented by custom design solutions in the area of camera electronics and CMOS image sensors.
Photonfocus is ISO 9001 certified. All products are produced with the latest techniques in order to ensure the highest degree of quality.
1.2 Contact
Photonfocus AG, Bahnhofplatz 10, CH-8853 Lachen SZ, Switzerland
Sales Phone: +41 55 451 07 45 Email: sales@photonfocus.com
Support Phone: +41 55 451 01 37 Email: support@photonfocus.com
Table 1.1: Photonfocus Contact
1.3 Sales Offices
Photonfocus products are available through an extensive international distribution network and through our key account managers. Details of the distributor nearest you and contacts to our key account managers can be found at www.photonfocus.com.
1.4 Further information
Photonfocus reserves the right to make changes to its products and documentation without notice. Photonfocus products are neither intended nor certified for use in life support systems or in other critical systems. The use of Photonfocus products in such applications is prohibited.
Photonfocus is a trademark and LinLog® is a registered trademark of Photonfocus AG. CameraLink® and GigE Vision® are registered trademarks of the Automated Imaging Association. Product and company names mentioned herein are trademarks or trade names of their respective companies.
Photonfocus cannot be held responsible for any technical or typographical errors.
1.5 Legend
In this documentation the reader’s attention is drawn to the following icons:
Important note
Alerts and additional information
Attention, critical warning
Notification, user guide
2 How to get started (GigE G2)
2.1 Introduction
This guide shows you:
How to install the required hardware (see Section 2.2)
How to install the required software (see Section 2.3) and configure the Network Adapter Card (see Section 2.4 and Section 2.5)
How to acquire your first images and how to modify camera settings (see Section 2.6)
A Starter Guide [MAN051] can be downloaded from the Photonfocus support page. It describes how to access Photonfocus GigE cameras from various third-party tools.
2.2 Hardware Installation
The hardware installation that is required for this guide is described in this section.
The following hardware is required:
PC with Microsoft Windows OS (XP, Vista, Windows 7)
A Gigabit Ethernet network interface card (NIC) must be installed in the PC. The NIC should support jumbo frames of at least 9014 bytes. In this guide the Intel PRO/1000 GT desktop adapter is used. The descriptions in the following chapters assume that such a network interface card (NIC) is installed. The latest drivers for this NIC must be installed.
Photonfocus GigE camera.
Suitable power supply for the camera (see in the camera manual for specification) which can be ordered from your Photonfocus dealership.
GigE cable of at least Cat 5E or 6.
Photonfocus GigE cameras can also be used under Linux.
Photonfocus GigE cameras work also with network adapters other than the Intel PRO/1000 GT. The GigE network adapter should support Jumbo frames.
Do not bend GigE cables too much. Excess stress on the cable results in transmission errors. In robot applications, the stress applied to the GigE cable is especially high due to the fast movement of the robot arm. For such applications, special drag-chain-capable cables are available.
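As a rough plausibility check of the Gigabit Ethernet link budget, the maximum achievable frame rate at full resolution can be estimated from the sensor geometry. This calculation is not part of the camera specification; it is a sketch that ignores GigE Vision protocol overhead and uses the 1312 x 1082 resolution and 8 bit pixel format from Chapter 3:

```python
# Rough upper bound on the frame rate a 1 Gbit/s link can carry
# for full-resolution 8 bit images (protocol overhead ignored).
WIDTH, HEIGHT = 1312, 1082       # sensor resolution in pixels
BITS_PER_PIXEL = 8               # 8 bit pixel format
LINK_BITS_PER_S = 1_000_000_000  # Gigabit Ethernet raw bit rate

bits_per_frame = WIDTH * HEIGHT * BITS_PER_PIXEL
max_fps = LINK_BITS_PER_S / bits_per_frame
print(f"{max_fps:.1f} fps")  # ~88 fps
```

The estimate of roughly 88 fps lies above the 67 fps maximum of the MV1-D1312(IE/C)-100, so the link itself is not the bottleneck at full resolution and 8 bit.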
The following list describes the connection of the camera to the PC (see in the camera manual for more information):
1. Remove the Photonfocus GigE camera from its packaging. Please make sure the following items are included with your camera:
Power supply connector
Camera body cap
If any items are missing or damaged, please contact your dealership.
2. Connect the camera to the GigE interface of your PC with a GigE cable of at least Cat 5E or 6.
Figure 2.1: Rear view of a Photonfocus GigE camera with power supply and I/O connector, Ethernet jack (RJ45) and status LED
3. Connect a suitable power supply to the power plug. The pin out of the connector is shown in the camera manual.
Check the correct supply voltage and polarity! Do not exceed the operating voltage range of the camera.
A suitable power supply can be ordered from your Photonfocus dealership.
4. Connect the power supply to the camera (see Fig. 2.1).
2.3 Software Installation
This section describes the installation of the required software to accomplish the tasks described in this chapter.
1. Install the latest drivers for your GigE network interface card.
2. Download the latest eBUS SDK installation file from the Photonfocus server.
You can find the latest version of the eBUS SDK on the support (Software Down­load) page at www.photonfocus.com.
3. Install the eBUS SDK software by double-clicking on the installation file. Please follow the instructions of the installation wizard. A window might be displayed warning that the software has not passed Windows Logo testing. You can safely ignore this warning and click on Continue Anyway. If at the end of the installation you are asked to restart the computer, please click on Yes to restart the computer before proceeding.
4. After the computer has been restarted, open the eBUS Driver Installation tool (Start -> All Programs -> eBUS SDK -> Tools -> Driver Installation Tool) (see Fig. 2.2). If there is more than one Ethernet network card installed then select the network card where your Photonfocus GigE camera is connected. In the Action drop-down list select Install eBUS Universal Pro Driver and start the installation by clicking on the Install button. Close the eBUS Driver Installation Tool after the installation has been completed. Please restart the computer if the program asks you to do so.
Figure 2.2: eBUS Driver Installation Tool
5. Download the latest PFInstaller from the Photonfocus server.
6. Install the PFInstaller by double-clicking on the file. In the Select Components (see Fig. 2.3) dialog check PF_GEVPlayer and doc for GigE cameras. For DR1 cameras select additionally DR1 support and 3rd Party Tools. For 3D cameras additionally select PF3DSuite2 and SDK.
Figure 2.3: PFInstaller components choice
2.4 Network Adapter Configuration
This section describes recommended network adapter card (NIC) settings that enhance the performance for GigE Vision. Additional tool-specific settings are described in the tool chapter.
1. Open the Network Connections window (Control Panel -> Network and Internet Connections -> Network Connections), right click on the name of the network adapter where the Photonfocus camera is connected and select Properties from the drop down menu that appears.
Figure 2.4: Local Area Connection Properties
2. By default, Photonfocus GigE Vision cameras are configured to obtain an IP address automatically. For this quick start guide it is recommended to configure the network adapter to obtain an IP address automatically. To do this, select Internet Protocol (TCP/IP) (see Fig. 2.4), click the Properties button and select Obtain an IP address automatically (see Fig. 2.5).
Figure 2.5: TCP/IP Properties
3. Open again the Local Area Connection Properties window (see Fig. 2.4) and click on the Configure button. In the window that appears click on the Advanced tab and click on Jumbo Frames in the Settings list (see Fig. 2.6). The highest number gives the best performance. Some tools however don’t support the value 16128. For this guide it is recommended to select 9014 Bytes in the Value list.
Figure 2.6: Advanced Network Adapter Properties
4. No firewall should be active on the network adapter where the Photonfocus GigE camera is connected. If the Windows Firewall is used then it can be switched off like this: Open the Windows Firewall configuration (Start -> Control Panel -> Network and Internet Connections -> Windows Firewall) and click on the Advanced tab. Uncheck the network where your camera is connected in the Network Connection Settings (see Fig. 2.7).
Figure 2.7: Windows Firewall Configuration
2.5 Network Adapter Configuration for Pleora eBUS SDK
Open the Network Connections window (Control Panel -> Network and Internet Connections -> Network Connections), right click on the name of the network adapter where the Photonfocus camera is connected and select Properties from the drop down menu that appears. A Properties window will open. Check the eBUS Universal Pro Driver (see Fig. 2.8) for maximal performance. Recommended settings for the Network Adapter Card are described in Section 2.4.
Figure 2.8: Local Area Connection Properties
2.6 Getting started
This section describes how to acquire images from the camera and how to modify camera settings.
1. Open the PF_GEVPlayer software (Start -> All Programs -> Photonfocus -> GigE_Tools -> PF_GEVPlayer) which is a GUI to set camera parameters and to see the grabbed images (see Fig. 2.9).
Figure 2.9: PF_GEVPlayer start screen
2. Click on the Select / Connect button in the PF_GEVPlayer. A window with all detected devices appears (see Fig. 2.10). If your camera is not listed, then check the box Show unreachable GigE Vision Devices.
Figure 2.10: GEV Device Selection Procedure displaying the selected camera
3. Select the camera model to configure and click on Set IP Address....
Figure 2.11: GEV Device Selection Procedure displaying GigE Vision Device Information
4. Select a valid IP address for the selected camera (see Fig. 2.12). There should be no exclamation mark on the right side of the IP address. Click on Ok in the Set IP Address dialog. Select the camera in the GEV Device Selection dialog and click on Ok.
Figure 2.12: Setting IP address
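A camera IP address is valid in the sense used above if it lies in the same subnet as the network adapter. This can be checked quickly with Python's standard ipaddress module; the addresses below are hypothetical example values, not values prescribed by the camera:

```python
import ipaddress

# Hypothetical example values: replace with your NIC's subnet
# and the IP address you intend to assign to the camera.
nic_network = ipaddress.ip_network("192.168.1.0/24")
camera_ip = ipaddress.ip_address("192.168.1.42")

# The camera is directly reachable (no routing) only if its
# address lies inside the adapter's subnet.
print(camera_ip in nic_network)  # True
```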
5. Finish the configuration process and connect the camera to the PF_GEVPlayer.
Figure 2.13: PF_GEVPlayer is readily configured
6. The camera is now connected to the PF_GEVPlayer. Click on the Play button to grab images.
An additional check box DR1 appears for DR1 cameras. The camera is in double rate mode if this check box is checked. The demodulation is done in the PF_GEVPlayer software. If the check box is not checked, then the camera outputs an unmodulated image and the frame rate will be lower than in double rate mode.
If no images can be grabbed, close the PF_GEVPlayer, set the Jumbo Frames parameter (see Section 2.4) to a lower value and try again.
Figure 2.14: PF_GEVPlayer displaying live image stream
7. Check the status LED on the rear of the camera.
The status LED light is green when an image is being acquired, and it is red when serial communication is active.
8. Camera parameters can be modified by clicking on GEV Device control (see Fig. 2.15). The visibility option Beginner shows the basic parameters and hides the more advanced parameters. If you don't have previous experience with Photonfocus GigE cameras, it is recommended to use the Beginner level.
Figure 2.15: Control settings on the camera
9. To modify the exposure time scroll down to the AcquisitionControl control category (bold title) and modify the value of the ExposureTime property.
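The value written to ExposureTime is quantized by the camera to a model-specific increment (see the model tables in Chapter 3). The following sketch illustrates how a requested exposure time maps onto the nearest representable value; the helper function is illustrative, not part of the camera API:

```python
def quantize_exposure_us(requested_us: float, increment_ns: float) -> float:
    """Round a requested exposure time (in µs) to the nearest
    multiple of the camera's exposure time increment (in ns)."""
    increment_us = increment_ns / 1000.0
    steps = round(requested_us / increment_us)
    return steps * increment_us

# MV1-D1312(IE/C)-100: 40 ns exposure time increment (Table 3.4).
print(f"{quantize_exposure_us(100.03, 40):.2f}")  # 100.04
```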
3 Product Specification
3.1 Introduction
The MV1-D1312(IE/C)-G2 and DR1-D1312(IE)-200-G2 CMOS camera series are built around the A1312(IE/C) CMOS image sensor from Photonfocus, which provides a resolution of 1312 x 1082 pixels and a wide range of spectral sensitivity. There are standard monochrome, NIR-enhanced monochrome (IE) and colour (C) models. The camera series is aimed at standard applications in industrial image processing. The principal advantages are:
Resolution of 1312 x 1082 pixels.
Wide spectral sensitivity from 320 nm to 1030 nm for monochrome models.
Enhanced near infrared (NIR) sensitivity with the A1312IE CMOS image sensor.
High quantum efficiency: > 50% for monochrome models and between 25% and 45% for colour models.
High pixel fill factor (> 60%).
Superior signal-to-noise ratio (SNR).
Low power consumption at high speeds.
Very high resistance to blooming.
High dynamic range of up to 120 dB.
Ideal for high speed applications: Global shutter.
Image resolution of up to 12 bit.
On camera shading correction.
3x3 Convolver for image pre-processing included on camera (monochrome models only).
Up to 512 regions of interest (MROI).
2 look-up tables (12-to-8 bit) on user-defined image regions (Region-LUT).
Crosshairs overlay on the image (monochrome models only).
Image information and camera settings inside the image (status line) (not available in DR1-D1312(IE)-200).
Software provided for setting and storage of camera parameters.
The camera has a Gigabit Ethernet interface.
GigE Vision and GenICam compliant.
The DR1-D1312(IE)-200 camera uses a proprietary modulation algorithm to double the maximal frame rate compared to the MV1-D1312(IE/C)-100 camera.
The compact size of 60 x 60 x 51 mm³ makes the MV1-D1312(IE/C) and DR1-D1312(IE) CMOS cameras the perfect solution for applications in which space is at a premium.
Advanced I/O capabilities: 2 isolated trigger inputs, 2 differential isolated RS-422 inputs and 2 isolated outputs.
Programmable Logic Controller (PLC) for powerful operations on input and output signals.
Wide power input range from 12 V (-10 %) to 24 V (+10 %).
The general specification and features of the camera are listed in the following sections.
The G2 postfix in the camera name indicates that it is the second release of Photonfocus GigE cameras. The first release had the postfix GB and is not recommended for new designs. The main advantages of the G2 release compared with the GB release are the smaller size, better I/O capabilities and the support of 24 V voltage supply.
Figure 3.1: MV1-D1312(IE/C)-G2 and DR1-D1312(IE)-G2 cameras are GenICam compliant
Figure 3.2: MV1-D1312(IE/C)-G2 and DR1-D1312(IE)-G2 cameras are GigE Vision compliant
3.2 Feature Overview
Characteristics MV1-D1312(IE/C) and DR1-D1312(IE) Series
Interface Gigabit Ethernet
Camera Control GigE Vision Suite
Trigger Modes Software Trigger / External isolated trigger input / PLC Trigger
Features Greyscale resolution 12 bit / 10 bit / 8 bit (DR1-D1312(IE): 8 bit only)
Region of Interest (ROI)
Test pattern (LFSR and grey level ramp)
Shading Correction (Offset and Gain)
3x3 Convolver included on camera (monochrome models only)
High blooming resistance
Isolated trigger input and isolated strobe output
2 look-up tables (12-to-8 bit) on user-defined image region (Region-LUT)
Up to 512 regions of interest (MROI)
Image information and camera settings inside the image (status line) (not available for DR1-D1312(IE))
Crosshairs overlay on the image (monochrome models only)
Table 3.1: Feature overview (see Chapter 4 for more information)
Figure 3.3: MV1-D1312(IE/C) and DR1-D1312(IE) CMOS camera with C-mount lens
3.3 Technical Specification
Technical Parameters MV1-D1312(IE/C) and DR1-D1312(IE) Series
Technology CMOS active pixel (APS)
Scanning system Progressive scan
Optical format / diagonal 1" (13.6 mm diagonal) @ maximum resolution; 2/3" (11.6 mm diagonal) @ 1024 x 1024 resolution
Resolution 1312 x 1082 pixels
Pixel size 8 µm x 8 µm
Active optical area 10.48 mm x 8.64 mm (maximum)
Random noise < 0.3 DN @ 8 bit 1)
Fixed pattern noise (FPN) 3.4 DN @ 8 bit / correction OFF 1)
Fixed pattern noise (FPN) < 1 DN @ 8 bit / correction ON 1) 2)
Dark current MV1-D1312 and DR1-D1312 0.65 fA / pixel @ 27 °C
Dark current MV1-D1312IE and DR1-D1312IE 0.79 fA / pixel @ 27 °C
Full well capacity ~ 90 ke
Spectral range MV1-D1312 and DR1-D1312 350 nm ... 980 nm (see Fig. 3.4)
Spectral range MV1-D1312IE and DR1-D1312IE 320 nm ... 1000 nm (see Fig. 3.5)
Spectral range MV1-D1312C 390 nm ... 670 nm (to 10 % of peak responsivity) (see Fig. 3.4)
Responsivity MV1-D1312 and DR1-D1312 295 x 10³ DN/(J/m²) @ 670 nm / 8 bit
Responsivity MV1-D1312IE and DR1-D1312IE 305 x 10³ DN/(J/m²) @ 870 nm / 8 bit
Responsivity MV1-D1312C 190 x 10³ DN/(J/m²) @ 625 nm / 8 bit / gain = 1 (approximately 560 DN/(lux s) @ 625 nm / 8 bit / gain = 1) (see Fig. 3.4)
Quantum efficiency > 50 % 4), > 40 % 5)
Optical fill factor > 60 %
Dynamic range 60 dB in linear mode, 120 dB with LinLog®
Colour format (colour models) RGB Bayer Raw Data Pattern
Characteristic curve Linear, LinLog® 4)
Shutter mode Global shutter
Greyscale resolution 12 bit / 10 bit / 8 bit 3)
Table 3.2: General specification of the MV1-D1312(IE/C) and DR1-D1312(IE) camera series (Footnotes: 1) Indicated values are typical values. 2) Indicated values are subject to confirmation. 3) DR1-D1312(IE): 8 bit only. 4) Monochrome models only. 5) Colour models only.)
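The 60 dB linear dynamic range in Table 3.2 is consistent with the stated full well capacity: with ~90 ke⁻ full well, an effective dark noise floor of about 90 e⁻ would give exactly 60 dB. The noise value below is a back-of-the-envelope assumption for illustration, not a datasheet figure:

```python
import math

full_well_e = 90_000   # ~90 ke- full well capacity (Table 3.2)
noise_floor_e = 90     # assumed effective dark noise in electrons (illustrative)

# Dynamic range in dB = 20 * log10(signal / noise)
dynamic_range_db = 20 * math.log10(full_well_e / noise_floor_e)
print(f"{dynamic_range_db:.0f} dB")  # 60 dB
```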
MV1-D1312(IE/C)-40 MV1-D1312(IE/C)-80
Exposure time 10 µs ... 1.67 s 10 µs ... 0.83 s
Exposure time increment 100 ns 50 ns
Frame rate 5) (Tint = 10 µs) 27 fps @ 8 bit 54 fps @ 8 bit
Pixel clock frequency 40 MHz 40 MHz
Pixel clock cycle 25 ns 25 ns
Read out mode sequential or simultaneous
Table 3.3: Model-specific parameters (Footnote: 5) Maximum frame rate @ full resolution @ 8 bit.)
MV1-D1312(IE/C)-100 DR1-D1312(IE)-200
Exposure time 10 µs ... 0.67 s 10 µs ... 0.33 s
Exposure time increment 40 ns 20 ns
Frame rate 5) (Tint = 10 µs) 67 fps @ 8 bit 135 fps @ 8 bit 6)
Pixel clock frequency 50 MHz 50 MHz
Pixel clock cycle 20 ns 20 ns
Read out mode sequential or simultaneous
Table 3.4: Model-specific parameters (Footnotes: 5) Maximum frame rate @ full resolution @ 8 bit. 6) Double rate mode enabled.)
MV1-D1312(IE/C)-40 MV1-D1312(IE/C)-80
Operating temperature / moisture 0 °C ... 50 °C / 20 ... 80 %
Storage temperature / moisture -25 °C ... 60 °C / 20 ... 95 %
Camera power supply +12 V DC (-10 %) ... +24 V DC (+10 %)
Trigger signal input range +5 V ... +30 V DC
Max. power consumption @ 12 V < 4.4 W < 4.8 W
Lens mount C-Mount (CS-Mount optional)
Dimensions 60 x 60 x 51 mm³
Mass 310 g
Conformity CE / RoHS / WEEE
Table 3.5: Physical characteristics and operating ranges of the MV1-D1312(IE/C)-40 and MV1-D1312(IE/C)-80 cameras
MV1-D1312(IE/C)-100 DR1-D1312(IE)-200
Operating temperature / moisture 0 °C ... 50 °C / 20 ... 80 %
Storage temperature / moisture -25 °C ... 60 °C / 20 ... 95 %
Camera power supply +12 V DC (-10 %) ... +24 V DC (+10 %)
Trigger signal input range +5 V ... +15 V DC
Max. power consumption @ 12 V < 4.9 W < 5.8 W
Lens mount C-Mount (CS-Mount optional)
Dimensions 60 x 60 x 51 mm³
Mass 310 g
Conformity CE / RoHS / WEEE
Table 3.6: Physical characteristics and operating ranges of the MV1-D1312(IE/C)-100 and DR1-D1312(IE)-200 cameras
Fig. 3.4 shows the quantum efficiency and the responsivity of the monochrome A1312 CMOS sensor, displayed as a function of wavelength. For more information on photometric and radiometric measurements see the Photonfocus application note AN008 available in the support area of our website www.photonfocus.com.
Figure 3.4: Spectral response of the A1312 CMOS monochrome image sensor (standard) in the MV1-D1312 and DR1-D1312 camera series
Fig. 3.5 shows the quantum efficiency and the responsivity of the monochrome A1312IE CMOS sensor, displayed as a function of wavelength. The enhancement in the NIR quantum efficiency could be used to realize applications in the 900 to 1064 nm region.
Figure 3.5: Spectral response of the A1312IE monochrome image sensor (NIR enhanced) in the MV1-D1312IE and DR1-D1312IE camera series
Fig. 3.6 shows the quantum efficiency and Fig. 3.7 the responsivity of the A1312C CMOS sensor used in the colour cameras, displayed as a function of wavelength. For more information on photometric and radiometric measurements see the Photonfocus application notes AN006 and AN008 available in the support area of our website www.photonfocus.com.
Figure 3.6: Quantum efficiency of the A1312C CMOS image sensor in the MV1-D1312C colour camera series
Figure 3.7: Responsivity of the A1312C CMOS image sensor in the MV1-D1312C colour camera series
The A1312C colour sensor is equipped with a cover glass. It incorporates an infra-red cut-off filter to avoid false colours arising when an infra-red component is present in the illumination. Fig. 3.8 shows the transmission curve of the cut-off filter.
Figure 3.8: Transmission curve of the cut-off filter in the MV1-D1312C colour camera series
3.4 RGB Bayer Pattern Filter (colour models only)
Fig. 3.9 shows the Bayer filter arrangement on the pixel matrix in the MV1-D1312C camera series. The numbers in the figure represent the pixel position x, pixel position y.
The fixed Bayer pattern arrangement has to be considered when the ROI configuration is changed or the MROI feature is used (see Section 4.3). The colour order of the output depends on the line number in which an ROI starts, as an ROI can start at an even or an odd line number.
Figure 3.9: Bayer Pattern Arrangement in the MV1-D1312C camera series
4 Functionality
This chapter serves as an overview of the camera configuration modes and explains camera features. The goal is to describe what can be done with the camera. The setup of the MV1-D1312(IE/C) series cameras is explained in later chapters.
If not otherwise specified, the DR1-D1312(IE) camera series has the same functionality as the MV1-D1312(IE/C) camera series. In most cases only the MV1-D1312(IE/C) cameras are mentioned in the text.
4.1 Image Acquisition
4.1.1 Readout Modes
The MV1-D1312 CMOS cameras provide two different readout modes:
Sequential readout: Frame time is the sum of exposure time and readout time. Exposure time of the next image can only start when the readout of the current image is finished.
Simultaneous readout (interleave): The frame time is determined by the longer of the exposure time and the readout time. Exposure time of the next image can start during the readout time of the current image.
Readout Mode MV1-D1312 Series
Sequential readout available
Simultaneous readout available
Table 4.1: Readout mode of MV1-D1312 Series camera
The following figure illustrates the effect on the frame rate when using either the sequential readout mode or the simultaneous readout mode (interleave exposure).
Sequential readout mode: For the calculation of the frame rate only a single formula applies: frames per second equal the inverse of the sum of exposure time and readout time.
Simultaneous readout mode (exposure time < readout time): The frame rate is given by the readout time. Frames per second equal the inverse of the readout time.
Simultaneous readout mode (exposure time > readout time): The frame rate is given by the exposure time. Frames per second equal the inverse of the exposure time.
The simultaneous readout mode allows higher frame rates. However, if the exposure time greatly exceeds the readout time, the effect on the frame rate is negligible.
In simultaneous readout mode the image output faces minor limitations: the overall linear sensor response is partially restricted in the lower grey scale region.
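The three cases above can be condensed into a short calculation. The function below is an illustrative sketch, not a camera API: it simply evaluates the frame rate formulas of this section for given exposure and readout times (both in seconds).

```python
def frame_rate(t_exp, t_ro, simultaneous=True):
    """Frame rate in fps, following the formulas of this section."""
    if simultaneous:
        # Interleave: the longer of exposure and readout sets the pace.
        return 1.0 / max(t_exp, t_ro)
    # Sequential: exposure and readout happen one after the other.
    return 1.0 / (t_exp + t_ro)

# With t_exp = 2 ms and t_ro = 10 ms, simultaneous readout is limited
# by the readout time (100 fps); sequential readout reaches ~83 fps.
print(frame_rate(0.002, 0.010), frame_rate(0.002, 0.010, simultaneous=False))
```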
Figure 4.1: Frame rate in sequential readout mode and simultaneous readout mode
When changing readout mode from sequential to simultaneous readout mode or vice versa, new settings of the BlackLevelOffset and of the image correction are required.
Sequential readout
By default the camera continuously delivers images as fast as possible ("Free-running mode") in the sequential readout mode. Exposure time of the next image can only start if the readout time of the current image is finished.
Figure 4.2: Timing in free-running sequential readout mode
When the acquisition of an image needs to be synchronised to an external event, an external trigger can be used (refer to Section 4.4). In this mode, the camera is idle until it gets a signal to capture an image.
Figure 4.3: Timing in triggered sequential readout mode
Simultaneous readout (interleave exposure)
To achieve highest possible frame rates, the camera must be set to "Free-running mode" with simultaneous readout. The camera continuously delivers images as fast as possible. Exposure time of the next image can start during the readout time of the current image.
Figure 4.4: Timing in free-running simultaneous readout mode (readout time > exposure time)
Figure 4.5: Timing in free-running simultaneous readout mode (readout time < exposure time)
When the acquisition of an image needs to be synchronised to an external event, an external trigger can be used (refer to Section 4.4). In this mode, the camera is idle until it gets a signal to capture an image.
Figure 4.6: Timing in triggered simultaneous readout mode
4.1.2 Readout Timing
Sequential readout timing
By default, the camera is in free running mode and delivers images without any external control signals. The sensor is operated in sequential readout mode, which means that the sensor is read out after the exposure time. Then the sensor is reset, a new exposure starts and the readout of the image information begins again. The data is output on the rising edge of the pixel clock. The signals FRAME_VALID (FVAL) and LINE_VALID (LVAL) mask valid image information. The signal SHUTTER indicates the active exposure period of the sensor and is shown for clarity only.
Simultaneous readout timing
To achieve highest possible frame rates, the camera must be set to "Free-running mode" with simultaneous readout. The camera continuously delivers images as fast as possible. Exposure time of the next image can start during the readout time of the current image. The data is output on the rising edge of the pixel clock. The signals FRAME_VALID (FVAL) and LINE_VALID (LVAL)
Figure 4.7: Timing diagram of sequential readout mode
mask valid image information. The signal SHUTTER indicates the active integration phase of the sensor and is shown for clarity only.
Figure 4.8: Timing diagram of simultaneous readout mode (readout time > exposure time)
Figure 4.9: Timing diagram of simultaneous readout mode (readout time < exposure time)
Frame time: Frame time is the inverse of the frame rate.
Exposure time: Period during which the pixels are integrating the incoming light.
PCLK: Pixel clock on internal camera interface.
SHUTTER: Internal signal, shown only for clarity. Is ’high’ during the exposure time.
FVAL (Frame Valid): Is ’high’ while the data of one complete frame are transferred.
LVAL (Line Valid): Is ’high’ while the data of one line are transferred. Example: To transfer an image with 640x480 pixels, there are 480 LVAL within one FVAL active high period. One LVAL lasts 640 pixel clock cycles.
DVAL (Data Valid): Is ’high’ while data are valid.
DATA: Transferred pixel values. Example: For a 100x100 pixel image, there are 100 values transferred within one LVAL active high period, or 100*100 values within one FVAL period.
Line pause: Delay before the first line and after every following line when reading out the image data.
Table 4.2: Explanation of control and data signals used in the timing diagram
These terms will be used also in the timing diagrams of Section 4.4.
4.1.3 Exposure Control
The exposure time defines the period during which the image sensor integrates the incoming light. Refer to Section 3.3 for the allowed exposure time range.
4.1.4 Maximum Frame Rate
The maximum frame rate depends on the exposure time and the size of the image (see Section 4.3).
The maximal frame rate with current camera settings can be read out from the property FrameRateMax (AcquisitionFrameRateMax in GigE cameras).
4.2 Pixel Response
4.2.1 Linear Response
The camera offers a linear response between input light signal and output grey level. This can be modified by the use of LinLog® as described in the following sections. In addition, a linear digital gain may be applied, as follows. Please see Table 3.2 for more model-dependent information.
Black Level Adjustment
The black level is the average image value at no light intensity. It can be adjusted by the software. Thus, the overall image gets brighter or darker. Use a histogram to control the settings of the black level.
In CameraLink® cameras the black level is called "BlackLevelOffset" and in GigE cameras "BlackLevel".
4.2.2 LinLog®
Overview
The LinLog® technology from Photonfocus allows a logarithmic compression of high light intensities inside the pixel. In contrast to the classical non-integrating logarithmic pixel, the LinLog® pixel is an integrating pixel with global shutter and the possibility to control the transition between linear and logarithmic mode.
In situations involving high intrascene contrast, a compression of the upper grey level region can be achieved with the LinLog® technology. At low intensities each pixel shows a linear response. At high intensities the response changes to logarithmic compression (see Fig. 4.10). The transition region between linear and logarithmic response can be smoothly adjusted by software and is continuously differentiable and monotonic.
LinLog® is controlled by up to 4 parameters (Time1, Time2, Value1 and Value2). Value1 and Value2 correspond to the LinLog® voltage that is applied to the sensor. The higher the parameters Value1 and Value2 respectively, the stronger the compression for the high light intensities. Time1 and Time2 are normalised to the exposure time. They can be set to a maximum value of 1000, which corresponds to the exposure time.
Examples in the following sections illustrate the LinLog® feature.
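Because Time1 and Time2 are normalised to the exposure time, an absolute switching time must be converted before being written to the camera. The helper below is our own illustration of that conversion (the function name is ours, not a camera property):

```python
def linlog_time_param(t_switch, t_exp):
    """Convert an absolute switching time into the normalised Time1/Time2
    parameter: 0 ... 1000, where 1000 corresponds to the exposure time."""
    if not 0 <= t_switch <= t_exp:
        raise ValueError("switching time must lie within the exposure time")
    return round(1000 * t_switch / t_exp)

# Switching 4.2 ms into a 5 ms exposure corresponds to Time1 = 840.
print(linlog_time_param(0.0042, 0.005))  # -> 840
```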
LinLog1
In the simplest way the pixels are operated with a constant LinLog® voltage which defines the knee point of the transition. This procedure has the drawback that the linear response curve changes directly to a logarithmic curve, leading to a poor grey resolution in the logarithmic region (see Fig. 4.12).
Figure 4.10: Resulting LinLog2 response curve
Figure 4.11: Constant LinLog voltage in the Linlog1 mode
LinLog2
To get more grey resolution in the LinLog® mode, the LinLog2 procedure was developed. In LinLog2 mode a switching between two different logarithmic compressions occurs during the exposure time (see Fig. 4.13). The exposure starts with strong compression with a high LinLog® voltage (Value1). At Time1 the LinLog® voltage is switched to a lower voltage resulting in a weaker compression. This procedure gives a LinLog® response curve with more grey resolution. Fig. 4.14 and Fig. 4.15 show how the response curve is controlled by the three parameters Value1, Value2 and the LinLog® time Time1.
Settings in LinLog2 mode enable a fine tuning of the slope in the logarithmic region.
LinLog3
To enable more flexibility the LinLog3 mode with 4 parameters was introduced. Fig. 4.16 shows the timing diagram for the LinLog3 mode and the control parameters.
Figure 4.12: Response curve for different LinLog settings in LinLog1 mode (Value1 varied from 15 to 19; Time1 = Time2 = 1000, Value2 = Value1)
Figure 4.13: Voltage switching in the LinLog2 mode (Time2 = max. = 1000)
Figure 4.14: Response curve for different LinLog settings in LinLog2 mode (Time1 varied: 840, 920, 960, 980, 999; Time2 = 1000, Value1 = 19, Value2 = 14)
Figure 4.15: Response curve for different LinLog settings in LinLog2 mode (Time1 varied from 880 to 1000; Time2 = 1000, Value1 = 19, Value2 = 18)
Figure 4.16: Voltage switching in the LinLog3 mode (Value3 = constant = 0)
Figure 4.17: Response curve for different LinLog settings in LinLog3 mode (Time2 varied: 950, 960, 970, 980, 990; Time1 = 850, Value1 = 19, Value2 = 18)
4.3 Reduction of Image Size
With Photonfocus cameras there are several possibilities to focus on the interesting parts of an image, thus reducing the data rate and increasing the frame rate. The most commonly used feature is Region of Interest (ROI).
4.3.1 Region of Interest (ROI)
Some applications do not need full image resolution (e.g. 1312 x 1082 pixels). By reducing the image size to a certain region of interest (ROI), the frame rate can be increased. A region of interest can be almost any rectangular window and is specified by its position within the full frame and its width (W) and height (H). Fig. 4.18, Fig. 4.19 and Fig. 4.20 show possible configurations for the region of interest, and Table 4.3 presents numerical examples of how the frame rate can be increased by reducing the ROI.
Both reductions in x- and y-direction result in a higher frame rate.
The minimum width of the region of interest depends on the camera model. For more details please consult Table 4.5 and Table 4.7.
The minimum width must be positioned symmetrically towards the vertical center line of the sensor, as shown in Fig. 4.18, Fig. 4.19 and Fig. 4.20. A list of possible settings of the ROI for each camera model is given in Table 4.7.
Colour models only: the vertical start position and the height of every ROI should be an even number to obtain the correct Bayer pattern in the output image (see also Section 3.4).
It is recommended to re-adjust the settings of the shading correction each time a new region of interest is selected.
Figure 4.18: Possible configuration of the region of interest for the MV1-D1312(IE/C)-40 CMOS camera
Figure 4.19: Possible configuration of the region of interest with MV1-D1312(IE/C)-80 CMOS camera
Any region of interest may NOT be placed outside of the center of the sensor. Examples shown in Fig. 4.21 illustrate configurations of the ROI that are NOT allowed.
Figure 4.20: Possible configuration of the region of interest with MV1-D1312(IE/C)-100 and DR1-D1312(IE)-200 CMOS cameras
ROI Dimension [Standard] MV1-D1312(IE/C)-40 MV1-D1312(IE/C)-80
1312 x 1082 (full resolution) 27 fps 54 fps
minimum resolution 10224 fps (288 x 1) 10672 fps (416 x 1)
1280 x 1024 (SXGA) 29 fps 58 fps
1280 x 768 (WXGA) 39 fps 78 fps
800 x 600 (SVGA) 78 fps 157 fps
640 x 480 (VGA) 121 fps 241 fps
544 x 1 9696 fps 10493 fps
544 x 1082 62 fps 125 fps
1312 x 544 53 fps 107 fps
1312 x 256 113 fps 227 fps
544 x 544 124 fps 248 fps
1024 x 1024 36 fps 72 fps
1312 x 1 8103 fps 9532 fps
Table 4.3: Frame rates of different ROI settings (exposure time 10 µs; correction on, and sequential readout mode).
ROI Dimension [Standard] MV1-D1312(IE/C)-100 DR1-D1312(IE)-200 1)
1312 x 1082 (full resolution) 67 fps 135 fps
minimum resolution 10690 fps (544 x 1) 10766 fps (544 x 2)
1280 x 1024 (SXGA) 73 fps 146 fps
1280 x 768 (WXGA) 97 fps 194 fps
800 x 600 (SVGA) 195 fps 385 fps
640 x 480 (VGA) 300 fps 584 fps
544 x 1 10690 fps not allowed ROI setting
544 x 2 10066 fps 10766 fps
544 x 1082 157 fps 310 fps
1312 x 544 134 fps 266 fps
1312 x 256 282 fps 551 fps
544 x 544 308 fps 600 fps
1024 x 1024 90 fps 181 fps
1312 x 1 9879 fps not allowed ROI setting
Table 4.4: Frame rates of different ROI settings (exposure time 10 µs; correction on, and sequential readout mode). (Footnote: 1) double rate mode enabled)
Figure 4.21: ROI configuration examples that are NOT allowed
4.3.2 ROI configuration
In the MV1-D1312(IE/C) camera series the following restrictions have to be respected for the ROI configuration:
The minimum width (w) of the ROI depends on the camera model: 288 pixels for the MV1-D1312(IE/C)-40 camera, 416 pixels for the MV1-D1312(IE/C)-80 camera and 544 pixels for the MV1-D1312(IE/C)-100 camera.
The region of interest must overlap a minimum number of pixels centered to the left and to the right of the vertical middle line of the sensor (ovl).
DR1-D1312(IE) cameras only: the height must be an even number.
For any camera model of the MV1-D1312(IE/C) camera series the allowed ranges for the ROI settings can be deduced from the following formulas:

x_min = max(0, 656 + ovl - w)
x_max = min(656 - ovl, 1312 - w)

where "ovl" is the overlap over the middle line and "w" is the width of the region of interest.
Any ROI settings in x-direction exceeding the minimum ROI width must be modulo 32.
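The restrictions above can be checked programmatically. The following sketch (our own helper, not part of any SDK) evaluates the x_min/x_max formulas for a given ROI width and model-dependent overlap:

```python
def roi_x_range(w, ovl, sensor_width=1312):
    """Allowed ROI start positions in x-direction.

    w:   ROI width in pixels (must respect the model's minimum width
         and the modulo-32 condition)
    ovl: required pixel overlap over the sensor middle line
         (144, 208 or 272, depending on the model)
    """
    mid = sensor_width // 2  # vertical middle line at x = 656
    x_min = max(0, mid + ovl - w)
    x_max = min(mid - ovl, sensor_width - w)
    if x_max < x_min:
        raise ValueError("ROI width is below the model's minimum")
    return x_min, x_max

# MV1-D1312(IE/C)-40 (ovl = 144), width 512:
print(roi_x_range(512, 144))  # -> (288, 512), as listed in Table 4.7
```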
MV1-D1312(IE/C)-40 MV1-D1312(IE/C)-80
ROI width (w) 288 ... 1312 416 ... 1312
overlap (ovl) 144 208
width condition modulo 32 modulo 32
height condition 1 ... 1082 1 ... 1082
Table 4.5: Summary of the ROI configuration restrictions for the MV1-D1312(IE/C)-40 and MV1-D1312(IE/C)-80 cameras indicating the minimum ROI width (w) and the required number of pixel overlap (ovl) over the sensor middle line.
MV1-D1312(IE/C)-100 DR1-D1312(IE)-200
ROI width (w) 544 ... 1312 544 ... 1312
overlap (ovl) 272 272
width condition modulo 32 modulo 32
height condition 1 ... 1082 2 ... 1082, modulo 2
Table 4.6: Summary of the ROI configuration restrictions for the MV1-D1312(IE/C)-100 and DR1-D1312(IE)-200 cameras indicating the minimum ROI width (w) and the required number of pixel overlap (ovl) over the sensor middle line.
The settings of the region of interest in x-direction are restricted to modulo 32 (see Table 4.7).
There are no restrictions for the settings of the region of interest in y-direction in the MV1-D1312(IE/C) camera series. The ROI settings in y-direction of the DR1-D1312(IE)-200 camera are restricted to modulo 2.
Width ROI-X (MV1-D1312(IE/C)-40) ROI-X (MV1-D1312(IE/C)-80) ROI-X (-100 2), -200 3))
288 512 not available not available
320 480 ... 512 not available not available
352 448 ... 512 not available not available
384 416 ... 512 not available not available
416 384 ... 512 448 not available
448 352 ... 512 416 ... 448 not available
480 320 ... 512 384 ... 448 not available
512 288 ... 512 352 ... 448 not available
544 256 ... 512 320 ... 448 384
576 224 ... 512 288 ... 448 352 ... 384
608 192 ... 512 256 ... 448 320 ... 384
640 160 ... 512 224 ... 448 288 ... 384
672 128 ... 512 192 ... 448 256 ... 384
704 96 ... 512 160 ... 448 224 ... 384
736 64 ... 512 128 ... 448 192 ... 384
768 32 ... 512 96 ... 448 160 ... 384
800 0 ... 512 64 ... 448 128 ... 384
832 0 ... 480 32 ... 448 96 ... 384
864 0 ... 448 0 ... 448 64 ... 384
896 0 ... 416 0 ... 416 32 ... 384
... ... ... ...
1312 0 0 0
Table 4.7: Some possible ROI-X settings (Footnotes: 2) MV1-D1312(IE/C)-100, 3) DR1-D1312(IE)-200)
4.3.3 Calculation of the maximum frame rate
The frame rate mainly depends on the exposure time and readout time. The frame rate is the inverse of the frame time.
The maximal frame rate with current camera settings can be read out from the property FrameRateMax.
fps = 1 / t_frame

Calculation of the frame time (sequential mode)

t_frame = t_exp + t_ro

Typical values of the readout time t_ro are given in Table 4.8 and Table 4.9.

Calculation of the frame time (simultaneous mode)

The calculation of the frame time in simultaneous readout mode requires more detailed data input and is skipped here for the purpose of clarity.
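As a numeric illustration, the sequential-mode formula can be evaluated with the readout times from Table 4.8 (the function below is our own sketch, not a camera property):

```python
def max_frame_rate_sequential(t_exp, t_ro):
    """Maximum frame rate (fps) in sequential readout mode:
    fps = 1 / t_frame with t_frame = t_exp + t_ro (in seconds)."""
    return 1.0 / (t_exp + t_ro)

# MV1-D1312(IE/C)-40 at full resolution: t_ro = 36.46 ms (Table 4.8).
# With t_exp = 10 us this yields roughly 27 fps, matching Table 4.10.
print(round(max_frame_rate_sequential(10e-6, 36.46e-3)))  # -> 27
```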
ROI Dimension MV1-D1312(IE/C)-40 MV1-D1312(IE/C)-80 MV1-D1312(IE/C)-100
1312 x 1082 t_ro = 36.46 ms t_ro = 18.23 ms t_ro = 14.59 ms
1024 x 512 t_ro = 13.57 ms t_ro = 6.78 ms t_ro = 5.43 ms
1024 x 256 t_ro = 6.78 ms t_ro = 3.39 ms t_ro = 2.73 ms
Table 4.8: Read out time at different ROI settings for the MV1-D1312(IE/C) CMOS camera series in sequential read out mode. (Footnote: 1) double rate mode enabled)
ROI Dimension DR1-D1312(IE)-200
1312 x 1082 t_ro = 7.30 ms
1024 x 512 t_ro = 2.72 ms
1024 x 256 t_ro = 1.36 ms
Table 4.9: Read out time at different ROI settings for the DR1-D1312(IE) CMOS camera series in sequential read out mode, double rate mode enabled.
A frame rate calculator for calculating the maximum frame rate is available in the support area of the Photonfocus website.
An overview of resulting frame rates for different exposure time settings is given in Table 4.10 and Table 4.11.
Exposure time MV1-D1312(IE/C)-40 MV1-D1312(IE/C)-80 MV1-D1312(IE/C)-100
10 µs 27 / 27 fps 54 / 54 fps 67 / 67 fps
100 µs 27 / 27 fps 54 / 54 fps 67 / 67 fps
500 µs 27 / 27 fps 53 / 54 fps 65 / 67 fps
1 ms 27 / 27 fps 51 / 54 fps 63 / 67 fps
2 ms 26 / 27 fps 49 / 54 fps 60 / 67 fps
5 ms 24 / 27 fps 42 / 54 fps 50 / 67 fps
10 ms 22 / 27 fps 35 / 54 fps 40 / 67 fps
12 ms 21 / 27 fps 33 / 54 fps 37 / 67 fps
Table 4.10: Frame rates of different exposure times, [sequential readout mode / simultaneous readout mode], resolution 1312 x 1082 pixel (correction on).
Exposure time DR1-D1312(IE)-200
10 µs 135 / 135 fps
100 µs 133 / 135 fps
500 µs 127 / 135 fps
1 ms 119 / 135 fps
2 ms 106 / 134 fps
5 ms 80 / 135 fps
10 ms 57 / 99 fps
12 ms 51 / 82 fps
Table 4.11: Frame rates of different exposure times, [sequential readout mode / simultaneous readout mode], resolution 1312 x 1082 pixel (correction on), double rate mode enabled.
4.3.4 Multiple Regions of Interest
The MV1-D1312(IE/C) camera series can handle up to 512 different regions of interest. This feature can be used to reduce the image data and increase the frame rate. An application example for using multiple regions of interest (MROI) is a laser triangulation system with several laser lines. The multiple ROIs are joined together and form a single image, which is transferred to the frame grabber.
An individual MROI region is defined by its starting value in y-direction and its height. The starting value in horizontal direction and the width is the same for all MROI regions and is defined by the ROI settings. The maximum frame rate in MROI mode depends on the number of rows and columns being read out. Overlapping ROIs are allowed. See Section 4.3.3 for information on the calculation of the maximum frame rate.
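Since the MROI regions are joined together into a single output image, the height of that image is simply the sum of the individual region heights. The helper below is our own illustration, not a camera property:

```python
def mroi_output_height(mrois):
    """Height of the joined output image for MROI regions given as
    (start_y, height) tuples. Regions are stacked in the output, so
    each region contributes its full height (overlaps are allowed)."""
    return sum(height for _start_y, height in mrois)

# Three line regions of 20, 20 and 14 rows yield a 54-row output image.
print(mroi_output_height([(100, 20), (400, 20), (800, 14)]))  # -> 54
```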
Fig. 4.22 compares ROI and MROI: the setups (visualized on the image sensor area) are displayed in the upper half of the drawing. The lower half shows the dimensions of the resulting image. On the left-hand side an example of ROI is shown and on the right-hand side an example of MROI. It can be readily seen that the resulting image with MROI is smaller than the resulting image with ROI only, and the former will result in a higher image frame rate.
ROI and MROI not only increase the frame rate, but also reduce the amount of data to be processed. This increases the performance of your image processing system.
Figure 4.22: Multiple Regions of Interest
Fig. 4.23 shows another MROI drawing illustrating the effect of MROI on the image content.
Figure 4.23: Multiple Regions of Interest with 5 ROIs
Fig. 4.24 shows an example from hyperspectral imaging where the presence of spectral lines at known regions needs to be inspected. By using MROI, only a 656x54 region needs to be read out, and a frame rate of 4300 fps can be achieved. Without MROI the resulting frame rate would be 216 fps for a 656x1082 ROI.
Figure 4.24: Multiple Regions of Interest in hyperspectral imaging
4.3.5 Decimation (monochrome models only)
Decimation reduces the number of pixels in y-direction. Decimation can also be used together with ROI or MROI. Decimation in y-direction transfers only every nth row, which directly results in a reduced read-out time and a correspondingly higher frame rate.
Fig. 4.25 shows decimation on the full image. The rows that will be read out are marked by red lines. Row 0 is read out, followed by every nth row.
Figure 4.25: Decimation in full image
Fig. 4.26 shows decimation on a ROI. The row specified by the Window.Y setting is read out first, followed by every nth row until the end of the ROI.
Figure 4.26: Decimation and ROI
Fig. 4.27 shows decimation and MROI. For every MROI region m, the first row read out is the row specified by the MROI&lt;m&gt;.Y setting, followed by every nth row until the end of MROI region m.
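The row selection described above can be sketched as follows (an illustrative helper; the function name is ours):

```python
def decimated_rows(start_y, height, n):
    """Rows transferred for a (M)ROI region starting at start_y with the
    given height under decimation n: the first row, then every nth row."""
    return list(range(start_y, start_y + height, n))

# Decimation 3 on a region starting at row 0 with height 10:
print(decimated_rows(0, 10, 3))  # -> [0, 3, 6, 9]
```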
Figure 4.27: Decimation and MROI
The image in Fig. 4.28 on the right-hand side shows the result of decimation 3 of the image on the left-hand side.
Figure 4.28: Image example of decimation 3
An example of a high-speed measurement of the elongation of an injection needle is given in Fig. 4.29. In this application the height information is less important than the width information. Applying decimation 2 to the original image on the left-hand side doubles the resulting frame rate to about 7800 fps.
Figure 4.29: Example of decimation 2 on image of injection needle
4.4 Trigger and Strobe
4.4.1 Introduction
The start of the exposure of the camera’s image sensor is controlled by the trigger. The trigger can either be generated internally by the camera (free running trigger mode) or by an external device (external trigger mode).
This section refers to the external trigger mode if not otherwise specified.
In external trigger mode (TriggerMode=On), the trigger is applied according to the value of the TriggerSource property (see Section 4.4.2). The trigger signal can be configured to be active high or active low (property TriggerActivation). When the frequency of the incoming triggers is higher than the maximal frame rate of the current camera settings, then some trigger pulses will be missed. A missed trigger counter counts these events. This counter can be read out by the user. The input and output signals of the power connector are connected to the Programmable Logic Controller (PLC) which allows powerful operations of the input and output signals (see Section 5.6).
A suitable trigger breakout cable for the Hirose 12 pol. connector can be ordered from your Photonfocus dealership.
The exposure time in external trigger mode can be defined by the setting of the exposure time register (camera controlled exposure mode) or by the width of the incoming trigger pulse (trigger controlled exposure mode) (see Section 4.4.4).
An external trigger pulse starts the exposure of one image. In Burst Trigger Mode however, a trigger pulse starts the exposure of a user defined number of images (see Section 4.4.6).
The start of the exposure occurs shortly after the active edge of the incoming trigger. An additional trigger delay can be applied that delays the start of the exposure by a user-defined time (see Section 4.4.5). This is often used to start the exposure a defined time after triggering a flash light source.
4.4.2 Trigger Source
The trigger signal can be configured to be active high or active low by the TriggerActivation (category AcquisitionControl) property. One of the following trigger sources can be used:
Free running The trigger is generated internally by the camera. Exposure starts immediately after the camera is ready, and the maximal possible frame rate is attained if AcquisitionFrameRateEnable is disabled. Settings for free running trigger mode: TriggerMode = Off. In Constant Frame Rate mode (AcquisitionFrameRateEnable = True), exposure starts after a user-specified time has elapsed from the previous exposure start, so that the resulting frame rate is equal to the value of AcquisitionFrameRate.
Software Trigger The trigger signal is applied through a software command (TriggerSoftware in category AcquisitionControl). Settings for Software Trigger mode: TriggerMode = On and TriggerSource = Software.
Line1 Trigger The trigger signal is applied directly to the camera through pin ISO_IN1 of the power supply connector (see also Section A.1). A setup of this mode is shown in Fig. 4.31 and Fig. 4.32. The electrical interface of the trigger input and the strobe output is described in Section 5.5. Settings for Line1 Trigger mode: TriggerMode = On and TriggerSource = Line1.
PLC_Q4 Trigger The trigger signal is applied by the Q4 output of the PLC (see also Section 5.6). Settings for PLC_Q4 Trigger mode: TriggerMode = On and TriggerSource = PLC_Q4.
Some trigger signals are inverted. A schematic drawing is shown in Fig. 4.30.
Figure 4.30: Trigger source schematic
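The trigger source settings above can be sketched with GenICam-style feature writes. TriggerMode, TriggerSource, AcquisitionFrameRateEnable and AcquisitionFrameRate are the feature names used in this section; the `DummyCam` object and its `set` method are hypothetical stand-ins for an SDK's node access, so this is a sketch of the configuration logic, not a definitive SDK call sequence.

```python
def configure_trigger(cam, source):
    """Apply one of the trigger configurations described in Section 4.4.2."""
    if source == "FreeRunning":
        cam.set("TriggerMode", "Off")            # camera triggers itself
    elif source == "ConstantFrameRate":
        cam.set("TriggerMode", "Off")
        cam.set("AcquisitionFrameRateEnable", True)
        cam.set("AcquisitionFrameRate", 25.0)    # example: 25 fps
    elif source in ("Software", "Line1", "PLC_Q4"):
        cam.set("TriggerMode", "On")             # external trigger mode
        cam.set("TriggerSource", source)
    else:
        raise ValueError(f"unknown trigger source: {source}")

class DummyCam:
    """Minimal stand-in that records feature writes."""
    def __init__(self):
        self.features = {}
    def set(self, name, value):
        self.features[name] = value

cam = DummyCam()
configure_trigger(cam, "Line1")
assert cam.features == {"TriggerMode": "On", "TriggerSource": "Line1"}
```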
Figure 4.31: Trigger source
Figure 4.32: Trigger Inputs - Multiple GigE solution
4.4.3 Trigger and AcquisitionMode
The relationship between AcquisitionMode and TriggerMode is shown in Table 4.12. When TriggerMode=Off, then the frame rate depends on the AcquisitionFrameRateEnable property (see also under Free running in Section 4.4.2).
The ContinuousRecording and ContinousReadout modes can be used if more than one camera is connected to the same network and the cameras need to shoot images simultaneously. If all cameras are set to Continuous mode, then all will send their packets at the same time, resulting in network congestion. A better way is to set the cameras to ContinuousRecording mode and save the images in the memory of the IP engine. The images can then be claimed with ContinousReadout from one camera at a time, avoiding network collisions and congestion.
AcquisitionMode TriggerMode After the command AcquisitionStart is executed:
Continuous Off Camera is in free-running mode. Acquisition can be stopped by executing the AcquisitionStop command.
Continuous On Camera is ready to accept triggers according to the TriggerSource property. Acquisition and trigger acceptance can be stopped by executing the AcquisitionStop command.
SingleFrame Off Camera acquires one frame and acquisition stops.
SingleFrame On Camera is ready to accept one trigger according to the TriggerSource property. Acquisition and trigger acceptance is stopped after one trigger has been accepted.
MultiFrame Off Camera acquires n=AcquisitionFrameCount frames and acquisition stops.
MultiFrame On Camera is ready to accept n=AcquisitionFrameCount triggers according to the TriggerSource property. Acquisition and trigger acceptance is stopped after n triggers have been accepted.
SingleFrameRecording Off Camera saves one image in the onboard memory of the IP engine.
SingleFrameRecording On Camera is ready to accept one trigger according to the TriggerSource property. Trigger acceptance is stopped after one trigger has been accepted and the image is saved in the onboard memory of the IP engine.
SingleFrameReadout don’t care One image is acquired from the IP engine’s onboard memory. The image must have been saved in SingleFrameRecording mode.
ContinuousRecording Off Camera saves images in the onboard memory of the IP engine until the memory is full.
ContinuousRecording On Camera is ready to accept triggers according to the TriggerSource property. Images are saved in the onboard memory of the IP engine until the memory is full. 18 images can be saved at full resolution (1312x1082) in 8 bit mono mode.
ContinousReadout don’t care All images that have been previously saved in ContinuousRecording mode are acquired from the IP engine’s onboard memory.
Table 4.12: AcquisitionMode and Trigger
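Table 4.12 can be condensed into a small rule: how many frames the camera delivers after AcquisitionStart depends on the mode, the trigger state, and the incoming trigger count. The function below is a toy model of that rule for the three basic modes, purely illustrative and not an SDK call.

```python
def frames_delivered(mode, trigger_on, triggers=0, frame_count=1):
    """Toy model of Table 4.12 for the Continuous/SingleFrame/MultiFrame rows."""
    if mode == "SingleFrame":
        # one frame, or one accepted trigger
        return min(triggers, 1) if trigger_on else 1
    if mode == "MultiFrame":
        # n = AcquisitionFrameCount frames (or accepted triggers)
        return min(triggers, frame_count) if trigger_on else frame_count
    if mode == "Continuous":
        # runs until AcquisitionStop; triggered: one frame per trigger
        return triggers if trigger_on else float("inf")
    raise ValueError(mode)

assert frames_delivered("SingleFrame", trigger_on=False) == 1
assert frames_delivered("MultiFrame", True, triggers=10, frame_count=4) == 4
assert frames_delivered("Continuous", True, triggers=7) == 7
```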
4.4.4 Exposure Time Control
Depending on the trigger mode, the exposure time can be determined either by the camera or by the trigger signal itself:
Camera-controlled Exposure time In this trigger mode the exposure time is defined by the camera. For an active high trigger signal, the camera starts the exposure with a positive trigger edge and stops it when the preprogrammed exposure time has elapsed. The exposure time is defined by the software.
Trigger-controlled Exposure time In this trigger mode the exposure time is defined by the pulse width of the trigger pulse. For an active high trigger signal, the camera starts the exposure with the positive edge of the trigger signal and stops it with the negative edge.
Trigger-controlled exposure time is not available in simultaneous readout mode.
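The two exposure modes reduce to a simple timing rule: with camera-controlled exposure the programmed time wins; with trigger-controlled exposure the trigger pulse width wins. The sketch below expresses just that rule (times in microseconds, active-high trigger assumed); the function name is illustrative, not a camera feature.

```python
def exposure_time_us(mode, programmed_us, rising_us, falling_us):
    """Effective exposure time for the two exposure control modes above."""
    if mode == "CameraControlled":
        return programmed_us                 # preprogrammed time wins
    if mode == "TriggerControlled":
        return falling_us - rising_us        # trigger pulse width wins
    raise ValueError(mode)

# 800 us trigger pulse, 500 us programmed exposure:
assert exposure_time_us("CameraControlled", 500.0, 0.0, 800.0) == 500.0
assert exposure_time_us("TriggerControlled", 500.0, 0.0, 800.0) == 800.0
```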
External Trigger with Camera controlled Exposure Time
In the external trigger mode with camera controlled exposure time, the rising edge of the trigger pulse starts the camera state machine, which controls the sensor and optionally an external strobe output. Fig. 4.33 shows the detailed timing diagram for the external trigger mode with camera controlled exposure time.
Figure 4.33: Timing diagram for the camera controlled exposure time
The rising edge of the trigger signal is detected in the camera control electronics, which are implemented in an FPGA. Before the trigger signal reaches the FPGA it is isolated from the camera environment to allow robust integration of the camera into the vision system. In the signal isolator the trigger signal is delayed by the time t_d-iso-input. This signal is clocked into the FPGA, which leads to a jitter of t_jitter. The pulse can be delayed by the time t_trigger-delay, which can be configured by a user defined value via the camera software. The trigger offset delay t_trigger-offset then results from the synchronous design of the FPGA state machines. The exposure time t_exposure is controlled with an internal exposure time controller.
The trigger pulse from the internal camera control also starts the strobe control state machines. The strobe can be delayed by t_strobe-delay with an internal counter which can be controlled by the customer via software settings. The strobe offset delay t_strobe-offset then results from the synchronous design of the FPGA state machines. A second counter determines the strobe duration t_strobe-duration. For a robust system design the strobe output is also isolated from the camera electronics, which leads to an additional delay of t_d-iso-output. Section 4.4.6 gives an overview of the minimum and maximum values of these parameters.
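The delays described above simply add up on the way from the trigger edge to the start of exposure: isolator delay, clocking jitter, programmed trigger delay and the fixed trigger offset. A worst-case estimate, using the MV1-D1312(IE/C)-40 maxima from Table 4.13 as example numbers (the helper function itself is illustrative):

```python
def exposure_start_delay_ns(t_d_iso_input, t_jitter, t_trigger_delay,
                            t_trigger_offset):
    """Delay from the external trigger edge to the start of exposure (ns)."""
    return t_d_iso_input + t_jitter + t_trigger_delay + t_trigger_offset

# Worst case with no user trigger delay programmed (t_trigger-delay = 0):
# max isolator delay 1.5 us, max jitter 100 ns, fixed offset 400 ns.
worst_case = exposure_start_delay_ns(1500, 100, 0, 400)
assert worst_case == 2000    # i.e. at most 2 us to exposure start
```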
External Trigger with Pulsewidth controlled Exposure Time
In the external trigger mode with pulse width controlled exposure time, the rising edge of the trigger pulse starts the camera state machine, which controls the sensor. The falling edge of the trigger pulse stops the image acquisition. Additionally, the optional external strobe output is controlled by the rising edge of the trigger pulse. Fig. 4.34 shows the detailed timing for the external trigger mode with pulse width controlled exposure time.
Figure 4.34: Timing diagram for the Pulsewidth controlled exposure time
The timing from the rising edge of the trigger pulse up to the start of exposure and strobe is equal to the timing of the camera controlled exposure time (see Section 4.4.4). In this mode, however, the end of the exposure is controlled by the falling edge of the trigger pulse: the falling edge of the trigger pulse is delayed by the time t_d-iso-input, which results from the signal isolator. This signal is clocked into the FPGA, which leads to a jitter of t_jitter. The pulse is then delayed by t_trigger-delay, the user defined value which can be configured via the camera software. After the trigger offset time t_trigger-offset the exposure is stopped.
4.4.5 Trigger Delay
The trigger delay is a programmable delay in milliseconds between the incoming trigger edge and the start of the exposure. This feature may be required to synchronize an external strobe with the exposure of the camera.
4.4.6 Burst Trigger
The camera includes a burst trigger engine. When enabled, it starts a predefined number of acquisitions after one single trigger pulse. The time between two acquisitions and the number of acquisitions can be configured by a user defined value via the camera software. The burst trigger feature works only in the mode "Camera controlled Exposure Time".
The burst trigger signal can be configured to be active high or active low. When the period of the incoming burst triggers is shorter than the duration of the programmed burst sequence, then some trigger pulses will be missed. A missed burst trigger counter counts these events. This counter can be read out by the user.
The burst trigger mode is only available when TriggerMode=On. The trigger source is determined by the TriggerSource property. The timing diagram of the burst trigger mode is shown in Fig. 4.35. The timing from the "external trigger pulse input" up to the "trigger pulse internal camera control" is equal to the timing shown in Fig. 4.34. After a user configurable burst trigger delay time t_burst-trigger-delay, this trigger pulse starts the internal burst engine, which generates n internal triggers for the shutter and the strobe control. A user configurable value defines the time t_burst-period-time between two acquisitions.
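A burst sequence only completes after the burst trigger delay plus n burst periods, so burst triggers arriving faster than that are missed. The arithmetic is worth spelling out (illustrative helper, times in seconds):

```python
def burst_duration_s(t_burst_trigger_delay, n, t_burst_period):
    """Total duration of one burst sequence of n acquisitions."""
    return t_burst_trigger_delay + n * t_burst_period

# Example: 8 acquisitions spaced 2 ms apart, 1 ms initial burst delay.
duration = burst_duration_s(1e-3, 8, 2e-3)
assert abs(duration - 0.017) < 1e-12     # ~17 ms per burst sequence
# Burst triggers arriving faster than 1/duration (~58 Hz) would be missed.
max_burst_rate_hz = 1.0 / duration
```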
Figure 4.35: Timing diagram for the burst trigger mode
MV1-D1312(IE/C)-40
Timing Parameter                    Minimum                      Maximum
t_d-iso-input                       1 µs                         1.5 µs
t_d-RS422-input                     65 ns                        185 ns
t_jitter                            0                            100 ns
t_trigger-delay                     0                            1.68 s
t_burst-trigger-delay               0                            1.68 s
t_burst-period-time                 depends on camera settings   1.68 s
t_trigger-offset (non burst mode)   400 ns                       400 ns
t_trigger-offset (burst mode)       500 ns                       500 ns
t_exposure                          10 µs                        1.68 s
t_strobe-delay                      0                            1.68 s
t_strobe-offset (non burst mode)    400 ns                       400 ns
t_strobe-offset (burst mode)        500 ns                       500 ns
t_strobe-duration                   200 ns                       1.68 s
t_d-iso-output                      150 ns                       350 ns
t_trigger-pulse-width               200 ns                       n/a
Number of bursts n                  1                            30000
Table 4.13: Summary of timing parameters relevant in the external trigger mode using camera MV1-D1312(IE/C)-40
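Before writing a timing value to the camera it is worth validating it against the min/max columns above. The sketch below reproduces a few rows of Table 4.13 (MV1-D1312(IE/C)-40) as a lookup table; the helper and its dictionary keys are assumptions for illustration, not camera features.

```python
LIMITS_MV1_D1312_40 = {                   # (min, max) in seconds, Table 4.13
    "exposure":        (10e-6,  1.68),
    "trigger_delay":   (0.0,    1.68),
    "strobe_duration": (200e-9, 1.68),
}

def check_timing(name, value_s, limits=LIMITS_MV1_D1312_40):
    """Reject a timing value outside the documented range."""
    lo, hi = limits[name]
    if not lo <= value_s <= hi:
        raise ValueError(f"{name}={value_s} s outside [{lo}, {hi}] s")
    return value_s

assert check_timing("exposure", 0.005) == 0.005   # 5 ms is within range
try:
    check_timing("exposure", 5e-6)                # below the 10 us minimum
    raised = False
except ValueError:
    raised = True
assert raised
```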
MV1-D1312(IE/C)-80
Timing Parameter                    Minimum                      Maximum
t_d-iso-input                       1 µs                         1.5 µs
t_d-RS422-input                     65 ns                        185 ns
t_jitter                            0                            50 ns
t_trigger-delay                     0                            0.84 s
t_burst-trigger-delay               0                            0.84 s
t_burst-period-time                 depends on camera settings   0.84 s
t_trigger-offset (non burst mode)   200 ns                       200 ns
t_trigger-offset (burst mode)       250 ns                       250 ns
t_exposure                          10 µs                        0.84 s
t_strobe-delay                      600 ns                       0.84 s
t_strobe-offset (non burst mode)    200 ns                       200 ns
t_strobe-offset (burst mode)        250 ns                       250 ns
t_strobe-duration                   200 ns                       0.84 s
t_d-iso-output                      150 ns                       350 ns
t_trigger-pulse-width               200 ns                       n/a
Number of bursts n                  1                            30000
Table 4.14: Summary of timing parameters relevant in the external trigger mode using camera MV1-D1312(IE/C)-80
MV1-D1312(IE/C)-100
Timing Parameter                    Minimum                      Maximum
t_d-iso-input                       1 µs                         1.5 µs
t_d-RS422-input                     65 ns                        185 ns
t_jitter                            0                            40 ns
t_trigger-delay                     0                            0.67 s
t_burst-trigger-delay               0                            0.67 s
t_burst-period-time                 depends on camera settings   0.67 s
t_trigger-offset (non burst mode)   160 ns                       160 ns
t_trigger-offset (burst mode)       200 ns                       200 ns
t_exposure                          10 µs                        0.67 s
t_strobe-delay                      0                            0.67 s
t_strobe-offset (non burst mode)    160 ns                       160 ns
t_strobe-offset (burst mode)        200 ns                       200 ns
t_strobe-duration                   200 ns                       0.67 s
t_d-iso-output                      150 ns                       350 ns
t_trigger-pulse-width               200 ns                       n/a
Number of bursts n                  1                            30000
Table 4.15: Summary of timing parameters relevant in the external trigger mode using camera MV1-D1312(IE/C)-100
DR1-D1312(IE)-200
Timing Parameter                    Minimum                      Maximum
t_d-iso-input                       1 µs                         1.5 µs
t_d-RS422-input                     65 ns                        185 ns
t_jitter                            0                            20 ns
t_trigger-delay                     0                            0.33 s
t_burst-trigger-delay               0                            0.33 s
t_burst-period-time                 depends on camera settings   0.33 s
t_trigger-offset (non burst mode)   80 ns                        80 ns
t_trigger-offset (burst mode)       100 ns                       100 ns
t_exposure                          10 µs                        0.33 s
t_strobe-delay                      0                            0.33 s
t_strobe-offset (non burst mode)    80 ns                        80 ns
t_strobe-offset (burst mode)        100 ns                       100 ns
t_strobe-duration                   200 ns                       0.33 s
t_d-iso-output                      150 ns                       350 ns
t_trigger-pulse-width               200 ns                       n/a
Number of bursts n                  1                            30000
Table 4.16: Summary of timing parameters relevant in the external trigger mode using camera DR1-D1312(IE)-200
4.4.7 Software Trigger
The software trigger enables the emulation of an external trigger pulse by the camera software through the serial data interface. It works with burst mode both enabled and disabled. As soon as the command is issued via the camera software, it starts the image acquisition(s), depending on the usage of the burst mode and the burst configuration. The trigger mode must be set to external trigger (TriggerMode = On).
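A software-triggered acquisition loop can be sketched as follows. TriggerMode, TriggerSource and the TriggerSoftware command are the feature names from this section; the `DummyCam` device handle and its `execute`/`retrieve` methods are hypothetical stand-ins for your SDK's API.

```python
def grab_n_images(cam, n):
    """Acquire n frames, each started by one software trigger."""
    cam.set("TriggerMode", "On")
    cam.set("TriggerSource", "Software")
    images = []
    for _ in range(n):
        cam.execute("TriggerSoftware")   # emulates one trigger pulse
        images.append(cam.retrieve())    # fetch the triggered frame
    return images

class DummyCam:
    """Stand-in device handle that counts software triggers."""
    def __init__(self):
        self.triggers = 0
    def set(self, name, value):
        pass
    def execute(self, command):
        self.triggers += 1
    def retrieve(self):
        return f"frame{self.triggers}"

assert grab_n_images(DummyCam(), 3) == ["frame1", "frame2", "frame3"]
```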
4.4.8 Strobe Output
The strobe output is an isolated output located on the power supply connector that can be used to trigger a strobe. The strobe output can be used both in free-running and in trigger mode. There is a programmable delay available to adjust the strobe pulse to your application.
The strobe output needs a separate power supply. Please see Section 5.5, Fig. 4.31 and Fig. 4.32 for more information.
4.5 Data Path Overview
The data path is the path of the image data from the output of the image sensor to the output of the camera. The sequence of processing blocks is shown in Fig. 4.36.
Monochrome cameras: Image Sensor -> FPN Correction -> Digital Offset -> Digital Gain -> Look-up table (LUT) -> 3x3 Convolver -> Crosshairs insertion -> Status line insertion -> Test images insertion -> Apply data resolution -> Image output
Colour cameras: Image Sensor -> FPN Correction -> Digital Offset -> Digital Gain / RGB Fine Gain -> Look-up table (LUT) -> Status line insertion -> Test images insertion -> Apply data resolution -> Image output
Figure 4.36: camera data path
4.6 Image Correction
4.6.1 Overview
The camera possesses image pre-processing features that compensate for non-uniformities caused by the sensor, the lens or the illumination. This method of improving the image quality is generally known as ’Shading Correction’ or ’Flat Field Correction’ and consists of a combination of offset correction, gain correction and pixel interpolation.
Since the correction is performed in hardware, there is no performance limitation of the cameras for high frame rates.
The offset correction subtracts a configurable positive or negative value from the live image and thus reduces the fixed pattern noise of the CMOS sensor. In addition, hot pixels can be removed by interpolation. The gain correction can be used to flatten uneven illumination or to compensate shading effects of a lens. Both offset and gain correction work on a pixel-per-pixel basis, i.e. every pixel is corrected separately. For the correction, a black reference and a grey reference image are required. Then, the correction values are determined automatically in the camera.
Do not set any reference images when gain or LUT is enabled! Read the follow­ing sections very carefully.
Correction values of both reference images can be saved into the internal flash memory, but this overwrites the factory presets. The reference images that were set at the factory can then no longer be restored.
4.6.2 Offset Correction (FPN, Hot Pixels)
The offset correction is based on a black reference image, which is taken at no illumination (e.g. lens aperture completely closed). The black reference image contains the fixed-pattern noise of the sensor, which can be subtracted from the live images in order to minimise the static noise.
Offset correction algorithm
After configuring the camera with a black reference image, the camera is ready to apply the offset correction:
1. Determine the average value of the black reference image.
2. Subtract the black reference image from the average value.
3. Mark pixels that have a grey level higher than 1008 DN (@ 12 bit) as hot pixels.
4. Store the result in the camera as the offset correction matrix.
5. During image acquisition, subtract the correction matrix from the acquired image and interpolate the hot pixels (see Section 4.6.2).
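The five steps above can be sketched on a toy 1-D "image" in pure Python (the camera performs this arithmetic in hardware). The correction matrix here holds each pixel's deviation from the black-reference average, so subtracting it removes the fixed-pattern noise; the sign convention is chosen to match that intent. The 1008 DN hot pixel threshold is the 12 bit value from the text.

```python
HOT_THRESHOLD = 1008  # DN @ 12 bit

def build_offset_matrix(black_ref):
    avg = sum(black_ref) / len(black_ref)          # step 1: average
    matrix = [p - avg for p in black_ref]          # step 2: deviation per pixel
    hot = [p > HOT_THRESHOLD for p in black_ref]   # step 3: mark hot pixels
    return matrix, hot                             # step 4: store in camera

def apply_offset(image, matrix, hot):
    out = [p - m for p, m in zip(image, matrix)]   # step 5: subtract matrix
    for i in range(1, len(out) - 1):
        if hot[i]:                                 # interpolate hot pixels
            out[i] = (out[i - 1] + out[i + 1]) / 2
    return out

black = [4, 2, 3, 1500, 3]                         # index 3 is a hot pixel
matrix, hot = build_offset_matrix(black)
assert hot == [False, False, False, True, False]
```

Applying the matrix to the black reference itself flattens it to its average, which is exactly the point: the static per-pixel deviations are removed.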
Figure 4.37: Schematic presentation of the offset correction algorithm
How to Obtain a Black Reference Image
In order to improve the image quality, the black reference image must meet certain demands.
The detailed procedure to set the black reference image is described in Section 6.5.
The black reference image must be obtained with no illumination, e.g. with the lens aperture completely closed or the lens capped.
It may be necessary to adjust the black level offset of the camera. In the histogram of the black reference image, ideally there are no grey levels at value 0 DN after adjustment of the black level offset. All pixels that are saturated black (0 DN) will not be properly corrected (see Fig. 4.38). The peak in the histogram should be well below the hot pixel threshold of 1008 DN @ 12 bit.
Camera settings may influence the grey level. Therefore, for best results the camera settings of the black reference image must be identical with the camera settings of the image to be corrected.
[Plot: histogram of the uncorrected black reference image; x axis: grey level, 12 bit [DN]; y axis: relative number of pixels; curves: black level offset ok / black level offset too low]
Figure 4.38: Histogram of a proper black reference image for offset correction
Hot pixel correction
Every pixel that exceeds a certain threshold in the black reference image is marked as a hot pixel. If the hot pixel correction is switched on, the camera replaces the value of a hot pixel by an average of its neighbour pixels (see Fig. 4.39).
A hot pixel p_n is replaced by the average of its neighbours: p_n = (p_n-1 + p_n+1) / 2
Figure 4.39: Hot pixel interpolation
4.6.3 Gain Correction
The gain correction is based on a grey reference image, which is taken at uniform illumination to give an image with a mid grey level.
Gain correction is not a trivial feature. The quality of the grey reference image is crucial for proper gain correction.
Gain correction algorithm
After configuring the camera with a black and grey reference image, the camera is ready to apply the gain correction:
1. Determine the average value of the grey reference image.
2. Subtract the offset correction matrix from the grey reference image.
3. Divide the average value by the offset corrected grey reference image.
4. Pixels that have a grey level higher than a certain threshold are marked as hot pixels.
5. Store the result in the camera as the gain correction matrix.
6. During image acquisition, multiply the gain correction matrix with the offset-corrected acquired image and interpolate the hot pixels (see Section 4.6.2).
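The gain-correction steps above can likewise be sketched on a toy 1-D "image" in pure Python (the camera does this in hardware). Hot pixel handling is left out to keep the sketch short, and a flat offset matrix is used for simplicity.

```python
def build_gain_matrix(grey_ref, offset_matrix):
    avg = sum(grey_ref) / len(grey_ref)                           # step 1
    corrected = [g - o for g, o in zip(grey_ref, offset_matrix)]  # step 2
    return [avg / c for c in corrected]                           # step 3

def apply_gain(offset_corrected_image, gain_matrix):
    # step 6: multiply the gain matrix with the offset-corrected image
    return [p * g for p, g in zip(offset_corrected_image, gain_matrix)]

grey = [100.0, 80.0, 120.0, 100.0]    # uneven illumination
offset = [0.0, 0.0, 0.0, 0.0]         # flat offset matrix for simplicity
gain = build_gain_matrix(grey, offset)
flat = apply_gain(grey, gain)
assert all(abs(v - 100.0) < 1e-9 for v in flat)   # illumination flattened
```

Pixels darker than the average get a gain above 1, brighter pixels a gain below 1, so the corrected grey reference becomes uniform.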
Figure 4.40: Schematic presentation of the gain correction algorithm
Gain correction always needs an offset correction matrix. Thus, the offset correction always has to be performed before the gain correction.
How to Obtain a Grey Reference Image
In order to improve the image quality, the grey reference image must meet certain demands.
The detailed procedure to set the grey reference image is described in Section 6.5.
The grey reference image must be obtained at uniform illumination.
Use a high quality light source that delivers uniform illumination. Standard illumination will not be appropriate.
When looking at the histogram of the grey reference image, ideally there are no grey levels at full scale (4095 DN @ 12 bit). All pixels that are saturated white will not be properly corrected (see Fig. 4.41).
Camera settings may influence the grey level. Therefore, the camera settings of the grey reference image must be identical with the camera settings of the image to be corrected.
4.6.4 Corrected Image
Offset, gain and hot pixel correction can be switched on separately. The following configurations are possible:
No correction
Offset correction only
Offset and hot pixel correction
Hot pixel correction only
Offset and gain correction
Offset, gain and hot pixel correction
[Plot: histogram of the uncorrected grey reference image; x axis: grey level, 12 bit [DN]; y axis: relative number of pixels; curves: grey reference image ok / grey reference image too bright]
Figure 4.41: Proper grey reference image for gain correction
Figure 4.42: Schematic presentation of the corrected image using gain correction algorithm
In addition, the black reference image and grey reference image that are currently stored in the camera RAM can be output. Table 4.17 shows the minimum and maximum values of the correction matrices, i.e. the range that the offset and gain algorithm can correct.
Minimum Maximum
Offset correction -1023 DN @ 12 bit +1023 DN @ 12 bit
Gain correction 0.42 2.67
Table 4.17: Offset and gain correction ranges
4.7 Digital Gain and Offset
There are two different gain settings on the camera:
Gain (Digital Fine Gain) Digital fine gain accepts fractional values from 0.01 up to 15.99. It is implemented as a multiplication operation. Colour camera models only: there is additionally a gain for every RGB colour channel. The RGB channel gain is used to calibrate the white balance in an image, which has to be set according to the current lighting condition.
Digital Gain Digital Gain is a coarse gain with the settings x1, x2, x4 and x8. It is implemented as a binary shift of the image data where ’0’ is shifted into the LSBs of the grey values. E.g. for gain x2, the output value is shifted by 1 and bit 0 is set to ’0’.
The resulting gain is the product of the two gain values, which means that the image data is multiplied in the camera by this factor.
Digital Fine Gain and Digital Gain may result in missing codes in the output image data.
A user-defined value can be subtracted from the grey value in the digital offset block. If digital gain is applied and the brightness of the image is too high, then the interesting part of the output image might be saturated. By subtracting an offset from the input of the gain block it is possible to avoid this saturation.
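The shift-based Digital Gain and the offset subtraction can be sketched in a few lines. The helper below is illustrative arithmetic (the camera does this in its pipeline); clamping to the 12 bit range stands in for saturation.

```python
def digital_gain(value, gain, offset=0, max_dn=4095):
    """Digital Gain as a binary shift with optional offset subtraction."""
    shift = {1: 0, 2: 1, 4: 2, 8: 3}[gain]       # x1, x2, x4, x8
    shifted = (value - offset) << shift          # zeros enter the LSBs
    return max(0, min(shifted, max_dn))          # clamp to 12 bit range

assert digital_gain(100, 2) == 200
assert digital_gain(3000, 2) == 4095                # saturates without offset
assert digital_gain(3000, 2, offset=1500) == 3000   # offset avoids saturation
```

The last two lines illustrate the point of the digital offset block: subtracting an offset before the gain keeps the region of interest out of saturation.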
4.8 Grey Level Transformation (LUT)
Grey level transformation is the remapping of the grey level values of an input image to new values. The look-up table (LUT) is used to convert the grey level value of each pixel in an image into another grey value. It is typically used to implement a transfer curve for contrast expansion. The camera performs a 12-to-8-bit mapping, so that 4096 input grey levels can be mapped to 256 output grey levels. The use of the three available modes is explained in the next sections. Two LUTs and a Region-LUT feature are available in the MV1-D1312 camera series (see Section 4.8.4).
For the MV1-D1312-240-CL camera series, bits 0 & 1 of the LUT input are fixed to 0.
The output grey level resolution of the look-up table (independent of gain, gamma or user-defined mode) is always 8 bit.
There are 2 predefined functions, which generate a look-up table and transfer it to the camera. For other transfer functions the user can define his own LUT file.
Some commonly used transfer curves are shown in Fig. 4.43. Line a denotes a negative or inverse transformation, line b enhances the image contrast between grey values x0 and x1. Line c shows brightness thresholding, and the result is an image with only black and white grey levels. Line d applies a gamma correction (see also Section 4.8.2).
Figure 4.43: Commonly used LUT transfer curves
4.8.1 Gain
The ’Gain’ mode performs a digital, linear amplification with clamping (see Fig. 4.44). It is configurable in the range from 1.0 to 4.0 (e.g. 1.234).
[Plot: grey level transformation - Gain: y = (255/1023) * a * x for a = 1.0, 2.0, 3.0, 4.0; x: grey level input value (10 bit) [DN]; y: grey level output value (8 bit) [DN]]
Figure 4.44: Applying a linear gain with clamping to an image
4.8.2 Gamma
The ’Gamma’ mode performs an exponential amplification, configurable in the range from 0.4 to 4.0. Gamma > 1.0 results in an attenuation of the image (see Fig. 4.45), gamma < 1.0 results in an amplification (see Fig. 4.46). Gamma correction is often used for tone mapping and better display of results on monitor screens.
[Plot: grey level transformation - Gamma: y = (255 / 1023^γ) * x^γ for γ = 1.0, 1.2, 1.5, 1.8, 2.5, 4.0 (γ ≥ 1); x: grey level input value (10 bit) [DN]; y: grey level output value (8 bit) [DN]]
Figure 4.45: Applying gamma correction to an image (gamma > 1)
[Plot: grey level transformation - Gamma: y = (255 / 1023^γ) * x^γ for γ = 1.0, 0.9, 0.8, 0.6, 0.4 (γ ≤ 1); x: grey level input value (10 bit) [DN]; y: grey level output value (8 bit) [DN]]
Figure 4.46: Applying gamma correction to an image (gamma < 1)
4.8.3 User-defined Look-up Table
In the ’User’ mode, the mapping of input to output grey levels can be configured arbitrarily by the user. There is an example file in the PFRemote folder. LUT files can easily be generated with a standard spreadsheet tool. The file has to be stored as tab delimited text file.
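Generating such a LUT file can be sketched as follows: 4096 input grey levels (12 bit) are mapped to 8 bit output values, here with a gamma curve, and written out tab delimited. The exact column layout expected by the camera software is an assumption; consult the example file in the PFRemote folder.

```python
def make_gamma_lut(gamma, n_in=4096, max_out=255):
    """12-to-8 bit gamma transfer curve as a list of output values."""
    return [round(max_out * (x / (n_in - 1)) ** gamma) for x in range(n_in)]

def write_lut(path, lut):
    """Write the LUT as a tab delimited text file: input <TAB> output."""
    with open(path, "w") as f:
        for x, y in enumerate(lut):
            f.write(f"{x}\t{y}\n")

lut = make_gamma_lut(0.6)              # gamma < 1: amplification
assert len(lut) == 4096
assert lut[0] == 0 and lut[-1] == 255
assert lut == sorted(lut)              # transfer curve is monotone
```

Such a table can be generated just as easily with a standard spreadsheet tool, as the text notes; the file only has to be stored as tab delimited text.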
Figure 4.47: Data path through LUT
4.8.4 Region LUT and LUT Enable
Two LUTs and a Region-LUT feature are available in the MV1-D1312(IE/C) camera series. Both LUTs can be enabled independently (see Table 4.18). LUT 0 supersedes LUT 1.
Enable LUT 0 Enable LUT 1 Enable Region LUT Description
- - - LUTs are disabled.
X don’t care - LUT 0 is active on the whole image.
- X - LUT 1 is active on the whole image.
X - X LUT 0 is active in Region 0.
X X X LUT 0 is active in Region 0 and LUT 1 is active in Region 1. LUT 0 supersedes LUT 1.
Table 4.18: LUT Enable and Region LUT
When the Region-LUT feature is enabled, then the LUTs are only active in a user defined region. Examples are shown in Fig. 4.48 and Fig. 4.49.
Fig. 4.48 shows an example of overlapping Region-LUTs. LUT 0, LUT 1 and Region LUT are enabled. LUT 0 is active in region 0 ((x00, x01), (y00, y01)) and it supersedes LUT 1 in the overlapping region. LUT 1 is active in region 1 ((x10, x11), (y10, y11)).
Figure 4.48: Overlapping Region-LUT example
Fig. 4.49 shows an example of keyhole inspection in a laser welding application. LUT 0 and LUT 1 are used to enhance the contrast by applying optimized transfer curves to the individual regions. LUT 0 is used for keyhole inspection. LUT 1 is optimized for seam finding.
Figure 4.49: Region-LUT in keyhole inspection
Fig. 4.50 shows the application of the Region-LUT to a camera image. The original image without image processing is shown on the left-hand side. The result of the application of the Region-LUT is shown on the right-hand side. One Region-LUT was applied on a small region on the lower part of the image where the brightness has been increased.
Figure 4.50: Region-LUT example with camera image; left: original image; right: gain 4 region in the area of the date print of the bottle
4.9 Convolver (monochrome models only)
4.9.1 Functionality
The "Convolver" is a discrete 2D-convolution filter with a 3x3 convolution kernel. The kernel coefficients can be user-defined.
The M x N discrete 2D-convolution p_out(x,y) of pixel p_in(x,y) with convolution kernel h, scale s and offset o is defined in Fig. 4.51.
Figure 4.51: Convolution formula
4.9.2 Settings
The following settings for the parameters are available:
Offset Offset value o (see Fig. 4.51). Range: -4096 ... 4095
Scale Scaling divisor s (see Fig. 4.51). Range: 1 ... 4095
Coefficients Coefficients of convolution kernel h (see Fig. 4.51). Range: -4096 ... 4095.
Assignment to coefficient properties is shown in Fig. 4.52.
Figure 4.52: Convolution coefficients assignment
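As an illustration, a straight-line implementation of the 3x3 convolution with scaling divisor and offset might look as follows. This is a sketch only: the exact border handling and rounding of the camera FPGA may differ, and clipping to the 12 bit range is an assumption.

```python
def convolve3x3(img, h, scale=1, offset=0, max_val=4095):
    """Discrete 2D-convolution of img with 3x3 kernel h, scaling divisor
    `scale` and offset `offset`; borders are handled by pixel replication."""
    height, width = len(img), len(img[0])
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            acc = 0
            for j in range(3):
                for i in range(3):
                    # replicate the nearest pixel at the image border
                    yy = min(max(y + j - 1, 0), height - 1)
                    xx = min(max(x + i - 1, 0), width - 1)
                    acc += h[j][i] * img[yy][xx]
            out[y][x] = min(max(offset + acc // scale, 0), max_val)
    return out
```

With the identity kernel (centre coefficient 1, all others 0), scale 1 and offset 0, the output equals the input image.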
4.9.3 Examples
Fig. 4.53 shows the result of applying various standard convolver settings to the original image. Fig. 4.54 shows the corresponding settings for every filter.
Figure 4.53: 3x3 Convolution filter examples 1
Figure 4.54: 3x3 Convolution filter examples 1 settings
A filter called Unsharp Mask is often used to enhance near infrared images. Fig. 4.55 shows examples with the corresponding settings.
Figure 4.55: Unsharp Mask Examples
4.10 Crosshairs (monochrome models only)
4.10.1 Functionality
The crosshairs feature inserts a vertical and a horizontal line into the image. The width of these lines is one pixel. The grey level is defined by a 12 bit value (0 means black, 4095 means white), which allows setting any grey level to obtain maximum contrast for the acquired image. The x/y position and the grey level can be set via the camera software. Fig. 4.56 shows two examples of the activated crosshairs with different grey values: one with white lines and the other with black lines.
Figure 4.56: Crosshairs Example with different grey values
The x and y position is absolute with respect to the sensor pixel matrix and independent of the ROI, MROI or decimation configurations. Fig. 4.57 shows two situations of the crosshairs configuration. The same MROI settings are used in both situations; the crosshairs, however, is set differently. The crosshairs is not visible in the image on the right, because its x and y position is set outside the MROI regions.
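This behaviour can be expressed as a small check. The function and parameter names below are hypothetical, not camera properties: each crosshair line appears in the output only if its absolute sensor coordinate falls inside the ROI window.

```python
def crosshairs_visible(cx, cy, roi_x, roi_y, roi_w, roi_h):
    """Return whether the vertical (x = cx) and horizontal (y = cy)
    crosshair lines fall inside an ROI given in absolute sensor coordinates."""
    vertical = roi_x <= cx < roi_x + roi_w
    horizontal = roi_y <= cy < roi_y + roi_h
    return vertical, horizontal
```

For example, a crosshair at (100, 100) inside a 200 x 50 ROI at the origin shows only the vertical line, since y = 100 lies below the ROI.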
Figure 4.57: Crosshairs absolute position
4.11 Image Information and Status Line (not available for DR1-D1312(IE))
There are camera properties available that give information about the acquired images, such as an image counter, average image value and the number of missed trigger signals. These properties can be queried by software. Alternatively, a status line within the image data can be switched on that contains all the available image information.
4.11.1 Counters and Average Value
Image counter The image counter provides a sequential number of every image that is output.
After camera startup, the counter counts up from 0 (counter width 24 bit). The counter can be reset by the camera control software.
Real Time counter The time counter starts at 0 after camera start, and counts real-time in units
of 1 micro-second. The time counter can be reset by the software in the SDK (Counter width 32 bit).
Missed trigger counter The missed trigger counter counts trigger pulses that were ignored by
the camera because they occurred within the exposure or read-out time of an image. In free-running mode it counts all incoming external triggers (counter width 8 bit / no wrap around).
Missed burst trigger counter The missed burst trigger counter counts trigger pulses that were
ignored by the camera in the burst trigger mode because they occurred while the camera still was processing the current burst trigger sequence.
Average image value The average image value gives the average of an image in 12 bit format
(0 .. 4095 DN), regardless of the currently used grey level resolution.
4.11.2 Status Line
If enabled, the status line replaces the last row of the image with camera status information. Every parameter is coded into fields of 4 pixels (LSB first) and uses the lower 8 bits of the pixel value, so that the total size of a parameter field is 32 bit (see Fig. 4.58). The assignment of the parameters to the fields is listed in Table 4.19.
The status line is available in all camera modes.
Figure 4.58: Status line parameters replace the last row of the image
Start pixel index | Parameter width [bit] | Parameter Description
0 | 32 | Preamble: 0x55AA00FF
4 | 24 | Image Counter (see Section 4.11.1)
8 | 32 | Real Time Counter (see Section 4.11.1)
12 | 8 | Missed Trigger Counter (see Section 4.11.1)
16 | 12 | Image Average Value (see Section 4.11.1)
20 | 24 | Integration Time in units of clock cycles (see Table 3.3)
24 | 16 | Burst Trigger Number
28 | 8 | Missed Burst Trigger Counter
32 | 11 | Horizontal start position of ROI (Window.X)
36 | 11 | Horizontal end position of ROI (= Window.X + Window.W - 1)
40 | 11 | Vertical start position of ROI (Window.Y). In MROI-mode this parameter is 0.
44 | 11 | Vertical end position of ROI (= Window.Y + Window.H - 1). In MROI-mode this parameter is the total height - 1.
48 | 2 | Trigger Source
52 | 2 | Digital Gain
56 | 2 | Digital Offset
60 | 16 | Camera Type Code (see Table 4.20)
64 | 32 | Camera Serial Number
Table 4.19: Assignment of status line fields
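A decoder for the status line follows directly from the field layout: each field spans 4 pixels, and the lower 8 bits of each pixel are combined LSB first into a 32-bit value. The sketch below reads a few of the fields from the last image row; the field selection and dictionary keys are illustrative, not SDK names.

```python
def decode_status_line(row):
    """Decode status line fields from the last image row (Table 4.19).
    Each field spans 4 pixels; the lower 8 bits of each pixel are
    combined LSB first into a 32-bit value."""
    def field(start_px):
        value = 0
        for k in range(4):
            value |= (row[start_px + k] & 0xFF) << (8 * k)
        return value

    if field(0) != 0x55AA00FF:
        raise ValueError("status line preamble not found")
    return {
        "image_counter": field(4) & 0xFFFFFF,        # 24 bit
        "real_time_counter": field(8),               # 32 bit
        "missed_trigger_counter": field(12) & 0xFF,  # 8 bit
        "image_average": field(16) & 0xFFF,          # 12 bit
    }
```

Note that the preamble 0x55AA00FF appears in the pixel stream as the byte sequence 0xFF, 0x00, 0xAA, 0x55, since fields are transmitted LSB first.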
Camera Model Camera Type Code
MV1-D1312-40-G2-12 225
MV1-D1312-80-G2-12 226
MV1-D1312-100-G2-12 227
DR1-D1312-200-G2-8 229
MV1-D1312IE-40-G2-12 246
MV1-D1312IE-80-G2-12 249
MV1-D1312IE-100-G2-12 248
DR1-D1312IE-200-G2-8 222
Table 4.20: Type codes of MV1-D1312(IE/C)-G2 and DR1-D1312(IE)-G2 camera series
4.12 Test Images
Test images are generated in the camera FPGA, independently of the image sensor. They can be used to check the transmission path from the camera to the frame grabber. Independent of the configured grey level resolution, every possible grey level appears the same number of times in a test image; therefore, the histogram of the received image must be flat.
A test image is a useful tool to find data transmission errors, which in CameraLink® cameras are most often caused by a defective cable between camera and frame grabber. In Gigabit Ethernet cameras, test images are mostly useful for testing the grabbing software.
The analysis of the test images with a histogram tool gives the correct result at a resolution of 1024 x 1024 pixels only.
4.12.1 Ramp
Depending on the configured grey level resolution, the ramp test image outputs a constant pattern with increasing grey level from the left to the right side (see Fig. 4.59).
Figure 4.59: Ramp test images: 8 bit output (left), 10 bit output (middle), 12 bit output (right)
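A reference row of the ramp pattern can be reproduced as below. This is a sketch assuming a linear mapping of column index to grey level; the camera's exact pattern may differ in detail.

```python
def ramp_row(width, bit_depth):
    """One row of a ramp test pattern: the grey level increases from left
    to right; at width 1024 every level occurs equally often."""
    levels = 1 << bit_depth
    return [(x * levels) // width for x in range(width)]
```

At a width of 1024 pixels and 8 bit depth, each of the 256 grey levels occurs exactly 4 times per row, which is why the histogram check described in Section 4.12.3 expects a 1024 pixel wide window.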
4.12.2 LFSR
The LFSR (linear feedback shift register) test image outputs a constant pattern with a pseudo-random grey level sequence containing every possible grey level, repeated for every row. The LFSR test pattern was chosen because it leads to a very high data toggling rate, which stresses the interface electronics. In the histogram you can see that the number of pixels of each grey value is the same.
Please refer to application note [AN026] for the calculation and the values of the LFSR test image.
4.12.3 Troubleshooting using the LFSR
To check the quality of your complete imaging system, enable the LFSR mode, set the camera window to 1024 x 1024 pixels (x = 0 and y = 0) and check the histogram. If your frame grabber application does not provide a real-time histogram, store the image and use a graphics software tool to display the histogram.
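If no real-time histogram tool is at hand, the flatness check can also be scripted. The sketch below assumes the stored image is already available as rows of pixel values:

```python
def histogram_is_flat(img, bit_depth=8):
    """Return True if every grey level occurs equally often -- the
    expected result for an error-free LFSR test image at 1024 x 1024."""
    counts = [0] * (1 << bit_depth)
    for row in img:
        for px in row:
            counts[px] += 1
    return min(counts) == max(counts)
```

Any deviation from a flat histogram indicates transmission errors, as described below.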
In the LFSR (linear feedback shift register) mode the camera generates a constant pseudo-random test pattern containing all grey levels (see Fig. 4.60). If the data transmission is error free, the histogram of the received LFSR test pattern will be flat (Fig. 4.61). On the other hand, a non-flat histogram (Fig. 4.62) indicates problems that may be caused either by a defective camera or by problems in the grabbing software.
Figure 4.60: LFSR (linear feedback shift register) test image
Figure 4.61: LFSR test pattern received and typical histogram for error-free data transmission
Figure 4.62: LFSR test pattern received and histogram containing transmission errors
In robot applications, the stress applied to the camera cable is especially high due to the fast movement of the robot arm. For such applications, special drag chain capable cables are available. Please contact Photonfocus Support for consulting expertise.
4.13 Double Rate (DR1-D1312(IE) only)
The Photonfocus DR1 cameras use a proprietary modulation algorithm to cut the data rate by almost a factor of two. This enables the transmission of high frame rates over a single Gigabit Ethernet connection, avoiding the complexity and stability issues of Ethernet link aggregation. The algorithm is lossy, but unlike JPEG compression, for example, it introduces no image artefacts. It is therefore very well suited for most machine vision applications, except for measuring tasks where sub-pixel precision is required.
Double rate modulation can be turned off for debugging purposes.
The modulated image is transmitted in mono 8 bit data resolution.
The modulation is run in real-time in the camera’s FPGA. A DLL for the demodulation of the image for SDK applications is included in the GEV-Player software package that can be downloaded from Photonfocus (see also 6).
The modulation factor is independent of the image content. The modulated image has the same number of rows as the unmodulated image. The required image width (number of bytes in a row) of the modulated image can be calculated as follows (the value can also be read from a camera property); see also Table 4.21:

w_mod = ceil(w/64) + w/2 + 2
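The formula can be checked against the values in Table 4.21; a minimal sketch (w is the unmodulated width in pixels, assumed even):

```python
import math

def modulated_width(w):
    """Width in bytes of one modulated row in double rate mode:
    w_mod = ceil(w/64) + w/2 + 2."""
    return math.ceil(w / 64) + w // 2 + 2
```

For example, modulated_width(544) gives 283 and modulated_width(1312) gives 679, matching the first and last rows of the table.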
Width unmodulated Width modulated
544 283
576 299
608 316
640 332
672 349
704 365
736 382
768 398
800 415
832 431
864 448
896 464
928 481
960 497
992 514
1024 530
1056 547
1088 563
1120 580
1152 596
1184 613
1216 629
1248 646
1280 662
1312 679
Table 4.21: Width of modulated image in double rate mode
5 Hardware Interface
5.1 GigE Connector
The GigE cameras are interfaced to external components via:
- an Ethernet jack (RJ45) to transmit configuration, image data and trigger signals.
- a 12 pin subminiature connector (Hirose HR10A-10P-12S, female) for the power supply.
The connectors are located on the back of the camera. Fig. 5.1 shows the plugs and the status LED which indicates camera operation.
Figure 5.1: Rear view of the GigE camera
5.2 Power Supply Connector
The camera requires a single voltage input (see Table 3.5). The camera meets all performance specifications using standard switching power supplies, although well-regulated linear power supplies provide optimum performance.
It is extremely important that you apply the appropriate voltages to your camera. Incorrect voltages will damage the camera.
A suitable power supply can be ordered from your Photonfocus dealership.
For further details including the pinout please refer to Appendix A.
5.3 Status Indicator (GigE cameras)
A dual-color LED on the back of the camera gives information about the current status of the GigE CMOS cameras.
LED Green | Green when an image is output. At slow frame rates, the LED blinks with the FVAL signal. At high frame rates the LED changes to an apparently continuous green light, with intensity proportional to the ratio of readout time over frame time.
LED Red | Red indicates active serial communication with the camera.
Table 5.1: Meaning of the LED of the GigE CMOS cameras
5.4 Power and Ground Connection for GigE G2 Cameras
The interface electronics is isolated from the camera electronics and the power supply including the line filters and camera case. Fig. 5.2 shows a schematic of the power and ground connections.
Figure 5.2: Schematic of power and ground connections
5.5 Trigger and Strobe Signals for GigE G2 Cameras
5.5.1 Overview
The 12-pol. Hirose power connector contains two external trigger inputs, two strobe outputs and two differential RS-422 inputs. All inputs and outputs are connected to the Programmable Logic Controller (PLC) (see also Section 5.6) that offers powerful operations.
The pinout of the power connector is described in Section A.1.
ISO_INC0 and ISO_INC1 RS-422 inputs have -10 V to +13 V extended common mode range.
ISO_OUT0 and ISO_OUT1 have different output circuits (see also Section 5.5.2).
A suitable trigger breakout cable for the Hirose 12 pol. connector can be ordered from your Photonfocus dealership.
Simulation with LTSpice is possible; a simulation model can be downloaded from our website www.photonfocus.com on the software download page (in the Support section). It is filed under "Third Party Tools".
Fig. 5.3 shows the schematic of the inputs and outputs. All inputs and outputs are isolated. ISO_VCC is an isolated, internally generated voltage.