If you need technical support, please contact your MOBOTIX dealer. If your questions cannot be answered
immediately, your reseller will forward your inquiries through the appropriate channels to ensure a quick
response.
If you have Internet access, you can download additional documentation and software updates from the
MOBOTIX helpdesk. Please visit:
www.mobotix.com
Imprint
This document is part of products by MOBOTIX AG, hereinafter referred to as the manufacturer, and describes
the use and configuration of AI-TECHApps on the camera and its components.
Changes, errors and misprints excepted.
Copyright
This document is protected by copyright. The information contained herein may not be passed on to third
parties without the express permission of the manufacturer. Contraventions will result in claims for damages.
Patent and copy protection
In the event that a patent, utility model or design protection is granted, all rights are reserved. Trademarks
and product names are trademarks or registered trademarks of their respective companies or organizations.
Address
MOBOTIX AG
Kaiserstrasse
67722 Langmeil
Germany
Tel.: +49 6302 9816-0
E-Mail: info@mobotix.com
Internet: www.mobotix.com
Support
See Support, p. 5.
Legal information
Special export regulations
Cameras with thermal imaging sensors ("thermal imaging cameras") are subject to the special sanctions and export regulations of the USA, including ITAR (International Traffic in Arms Regulations):
- According to the current US sanctions and export regulations, cameras with thermal image sensors or parts thereof may not be delivered to countries or regions on which the US has imposed an embargo, unless a special exemption has been granted. At present, this applies in particular to Syria, Iran, Cuba, North Korea, Sudan and Crimea. Furthermore, the corresponding delivery ban also applies to all persons and institutions listed in the Denied Persons List (see www.bis.doc.gov > Policy Guidance > Lists of Parties of Concern; https://www.treasury.gov/resource-center/sanctions/sdn-list/pages/default.aspx).
- These cameras and the thermal imaging sensors used therein must not be used in, or be used in the design, development or production of, nuclear, biological or chemical weapons.
Legal aspects of video and audio recording
When using MOBOTIX AG products, the data protection regulations for video and audio recording must be observed. Depending on the national law and the place of installation, the recording of video and audio data may be subject to special requirements or may be prohibited. All users of MOBOTIX products are therefore requested to inform themselves about the currently applicable regulations and to comply with them. MOBOTIX AG assumes no responsibility for any use of its products that does not comply with the applicable legal requirements.
Declaration of conformity
The products of MOBOTIX AG are certified according to the applicable directives of the EU and other countries. You can find the declarations of conformity for MOBOTIX AG products at www.mobotix.com under Support > Download Center > Documentation > Certificates & Declarations of Conformity.
Declaration of RoHS
The products of MOBOTIX AG comply with the requirements of §5 ElektroG and the RoHS Directive 2011/65/EU, insofar as they fall within the scope of these regulations. The RoHS declarations of MOBOTIX AG are available at www.mobotix.com under Support > Download Center > Documentation > Brochures & Instructions > Certificates.
Disposal
Electrical and electronic products contain many recyclable materials. Therefore, dispose of MOBOTIX products at the end of their service life in accordance with the applicable legal provisions and regulations (for example, by returning them to a municipal collection point). MOBOTIX products must not be disposed of with household waste! Dispose of any battery separately from the product (the respective product manual contains a corresponding note if the product contains a battery).
Disclaimer
MOBOTIX AG is not liable for damage caused by improper handling of its products, by failure to observe the operating instructions, or by failure to observe the relevant regulations. Our General Terms and Conditions apply; the current version can be downloaded from www.mobotix.com.
4
AI-Dashboard embedded for data
management
The data generated by AI-PEOPLE, AI-CROWD and AI-OVERCROWD can be stored on board on the camera's SD card through AI-Dashboard embedded.
The data can be visualized in two different ways:
- In tabular form, as a sequence of events. In this case, a sequence of images associated with the events is optionally available (not for AI-CROWD).
- As graphs of the events generated by the plugins, with the option to customize the time interval and the time resolution.
Fig. 1: Sequence of events
Fig. 2: Graphic
AI-Dash - configuration overview
The dashboard is generally divided into the following sections:
- The main menu at the top
- The live view area on the left
- The parameter section on the right
Fig. 3: Overview of the dashboard
Menu configuration
Fig. 4: Menu configuration
Note: Any changes made via AI-Config will only be applied to the application after the configuration has
been sent using the function in this panel.
The following functions are available:
Send configuration: the configuration is sent to and stored in the application.
Reload configuration: the current configuration is loaded from the application.
Save on file: the configuration can be downloaded and saved as a file in JSON format.
Load from file: a saved configuration can be loaded from a file in JSON format.
Test: sends a test event to all enabled channels in order to verify that the channel configuration is correct. Once clicked, simply click the "Test" button in the window that appears next. To exit test mode, simply click anywhere on the screen.
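The JSON round trip behind Save on file and Load from file can be sketched as follows; the parameter names in the example dictionary are illustrative, not the actual schema used by AI-Config.

```python
import json
from pathlib import Path

def save_config(config: dict, path: str) -> None:
    """Save on file: serialize the current configuration as JSON."""
    Path(path).write_text(json.dumps(config, indent=2))

def load_config(path: str) -> dict:
    """Load from file: read back a previously saved JSON configuration."""
    return json.loads(Path(path).read_text())

# Hypothetical parameters; the real schema is defined by AI-Config.
config = {"scheduler": {"mon-fri": "08:00-18:00"}, "gaussian_kernel": 3}
```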
Menu Administration
The administrator password to access the dashboard is the same as the camera's administrator password.
The default passwords for the app administration are:
for AI-RETAIL: AIRetail3
for AI-BIO: AIBiolight
for AI-SECURITY: AISecurity3
for AI-TRAFFIC: AITraffic
Fig. 5: Menu Administration
The following functions are available:
Change configurator password: a configurator can load a configuration but cannot change the parameters.
Change admin password: an administrator can fully edit all parameters.
Menu License
Licensing is available in MxManagementCenter only.
Highlight objects in the foreground
Fig. 6: Highlight objects in foreground
1. Activate Highlight objects in the foreground to verify that the configuration of the low-level parameters is correct.
Scheduler
Fig. 7: Scheduler
In many real installations, applications do not always need to be active. It may be required, for example, to enable the processing only from Monday to Friday, or every day within a certain time interval. For this reason, AI-RETAIL can be scheduled by configuring the periods in which it must be active and those in which it must not.
AI-Dash - Administrator parameters
For more experienced users, it is also possible to change the administrator parameters. In this section, you can modify the low-level parameters that are required for background updating and extraction of the foreground mask. It is generally recommended not to change these parameters. Because modifying them requires significant experience, this configuration is protected with a password.
The administrator password to access the dashboard is the same as the camera's administrator password.
The default passwords for the app administration are:
for AI-RETAIL: AIRetail3
for AI-BIO: AIBiolight
for AI-SECURITY: AISecurity3
for AI-TRAFFIC: AITraffic
Fig. 8: Administrator login with password
Face detection (AI-BIO only)
Fig. 9: AI-BIO Face detection
Scaling factor: growth factor of the face detection window (default 1.1). Increasing this value (max 2.0) makes the algorithm faster but less sensitive. Conversely, decreasing this value (min 1.01) makes the algorithm more sensitive but slower.
Number of classification stages (default 25): decreasing this value (it is suggested not to set it below 18) increases the algorithm's sensitivity, but also increases the false positive rate.
Minimum number of rectangles: minimum number of rectangles required to consider an object a detected face (default 1, maximum sensitivity). Decreasing this value (min 1) increases the algorithm's sensitivity, but also increases the false positive rate. On the other hand, if this value is increased excessively, the miss rate may increase (it is suggested not to go beyond the value 10).
Shift step: shift in pixels of the face detection window (default 2). Decreasing this value (min 1) increases the algorithm's sensitivity and the processing time. On the other hand, increasing this value reduces the sensitivity and the processing time (it is suggested not to go beyond the value 10).
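These parameters correspond to the knobs of a classic sliding-window cascade detector. The trade-off driven by the scaling factor can be illustrated with a small sketch (illustrative code, not the camera's actual implementation): a smaller growth factor means more window sizes are tried, which is slower but more sensitive.

```python
def window_sizes(min_px, max_px, scaling_factor):
    """Detection window sizes tried by a cascade-style face detector:
    starting from min_px, each size is the previous one times scaling_factor."""
    sizes, s = [], float(min_px)
    while s <= max_px:
        sizes.append(round(s))
        s *= scaling_factor
    return sizes

# The default 1.1 scans many scales; the maximum 2.0 scans only a few.
fine = window_sizes(24, 480, 1.1)
coarse = window_sizes(24, 480, 2.0)
```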
Gaussian filtering
Fig. 10: Gaussian filtering
Image pre-processing by Gaussian filtering eliminates acquisition noise in the image and makes the subsequent object detection operations easier and more effective. The default kernel is 3x3; other possible values are 5x5 and 7x7. Gaussian filtering can also be deactivated.
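The smoothing step amounts to convolving the image with a normalized Gaussian kernel of the configured size; this sketch (illustrative, not the camera's implementation) only builds the kernel.

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    """Normalized size x size Gaussian kernel (size 3, 5 or 7 as above).
    sigma is an assumed smoothing width, not a documented app parameter."""
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

k3 = gaussian_kernel(3)
```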
Background
Fig. 11: Background
The background settings allow modeling and updating the background by setting the time after which an object becomes part of the background. The output is an image in the YUV420 color space that represents the static part of the scene; it is then used to determine the dynamic part of the current frame, that is, the foreground mask.
Update latency (s): time period in seconds after which a change in the scene definitely becomes part of the background.
Threshold (YUV): a comparison is made between the current frame and the background image of the previous instant: if a pixel of the frame is "close" to the corresponding pixel of the background, it is not a foreground pixel; otherwise, that pixel will be white in the foreground mask. The comparison is made separately on each of the three YUV channels.
Fig. 12: Example background extraction using a threshold for each of the three YUV channels.
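A minimal sketch of the per-channel comparison described above (illustrative, operating on lists of YUV triples rather than real frames; the threshold values are assumptions):

```python
def foreground_mask(frame, background, thresholds=(30, 15, 15)):
    """frame and background are lists of (Y, U, V) pixel triples.
    A pixel is foreground (255) if it differs from the background by
    more than the threshold on at least one of the three channels."""
    mask = []
    for f, b in zip(frame, background):
        fg = any(abs(fc - bc) > t for fc, bc, t in zip(f, b, thresholds))
        mask.append(255 if fg else 0)
    return mask

bg = [(100, 128, 128), (100, 128, 128)]
frame = [(101, 128, 128), (200, 128, 128)]  # second pixel changed
```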
Update type: by specifying «Accurate (grayscale)» or «Accurate (color)» as the background update type, a state-of-the-art self-learning algorithm is used for extracting the foreground mask. The «grayscale» version uses only the Y channel, while the «color» version uses all three channels; the former is more efficient, while the latter is more effective. Moreover, shadow removal can be enabled only with the color version.
Morphological filtering
Fig. 13: Morphological filtering
Application of morphological erosion, dilation and a second erosion operator to improve the foreground mask.
Enable erosion (noise filtering): eliminates spurious white pixels caused by noise.
Enable dilation: fills holes and reinforces the union of weakly connected regions.
Enable erosion: allows recovering the original size of the objects.
You can choose the shape of the kernel (rectangular, diamond, octagon, disk) as well as its dimensions in terms of width and height (rectangular) or radius (diamond, octagon, disk).
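The erosion/dilation chain can be sketched on a binary mask (illustrative; only a square 3x3 kernel, not the other kernel shapes listed above):

```python
def _morph(mask, op, k=1):
    """Apply op (all = erosion, any = dilation) over each pixel's
    (2k+1) x (2k+1) neighbourhood of a binary mask."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [mask[ny][nx]
                      for ny in range(max(0, y - k), min(h, y + k + 1))
                      for nx in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = 1 if op(window) else 0
    return out

def erode(mask):
    """Pixel survives only if its whole neighbourhood is foreground."""
    return _morph(mask, all)

def dilate(mask):
    """Pixel is set if any neighbour is foreground."""
    return _morph(mask, any)

noisy = [[0, 0, 0],
         [0, 1, 0],   # single spurious foreground pixel
         [0, 0, 0]]
```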
Tracking (AI-BIO, AI-SECURITY only)
Fig. 14: Object tracking
Fig. 15: Tracking (AI-BIO and AI-SECURITY only)
Tracking follows objects across frames based on their position in the image. The objective is to find the correspondence between the object detected in the preceding frame (t-1) and the blob identified in the current frame (t), thereby solving problems related to occlusions (for example, trees).
Maximum radius: maximum movement of an object between two successive frames. A value that is too small may cause frequent ID switches, while a value that is too large may cause the same ID to be assigned to different objects. The value is expressed as a fraction of the frame diagonal.
Max ghost time (ms): maximum time (in milliseconds) for which a detected object can assume the status of ghost, i.e. be stored and retrieved in case of occlusion.
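The association step can be sketched as a greedy nearest-neighbour match within Maximum radius (an illustrative sketch; the real tracker also manages new IDs and the ghost timeout):

```python
import math

def match_blobs(tracked, blobs, max_radius):
    """tracked: {id: (x, y)} positions from frame t-1;
    blobs: [(x, y)] centroids detected at frame t.
    Returns {blob_index: id}. Blobs left unmatched would get new IDs;
    unmatched IDs would become ghosts until the ghost time expires."""
    matches, free = {}, dict(tracked)
    for i, c in enumerate(blobs):
        best, best_d = None, max_radius
        for oid, p in free.items():
            d = math.dist(p, c)
            if d <= best_d:
                best, best_d = oid, d
        if best is not None:
            matches[i] = best
            del free[best]
    return matches

prev = {1: (10.0, 10.0), 2: (50.0, 50.0)}
```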
Small objects filtering (AI-SECURITY only)
Fig. 16: Small objects filtering (AI-SECURITY only)
Elimination of blobs that are too small, too large or abnormally shaped, based on pixel dimensions.
Use aspect ratio: check to activate the aspect ratio settings. These settings allow detecting, for example, only people or only cars.
Minimum aspect ratio: defines the minimum value of the height-to-width ratio.
Maximum aspect ratio: defines the maximum value of the height-to-width ratio.
Enable filtering: check to activate the filtering settings. You can define minimum and maximum values for the height and width of a blob by drawing a pair of rectangles on the image.
Maximum width and height: defines the maximum object size.
Minimum width and height: defines the minimum object size.
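The two filters amount to simple range checks on each blob; the default values below are illustrative, not the app's defaults.

```python
def keep_blob(w, h, min_wh=(10, 10), max_wh=(200, 200),
              min_ar=1.5, max_ar=4.0):
    """Keep a blob only if its pixel size is within the configured range
    and its height/width aspect ratio matches the target class
    (tall ratios suggest people, wide ones suggest cars)."""
    if not (min_wh[0] <= w <= max_wh[0] and min_wh[1] <= h <= max_wh[1]):
        return False
    return min_ar <= h / w <= max_ar
```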
Filtering actual size (AI-SECURITY only)
To use this filter, the camera and the algorithm must first be calibrated, so that the relation that allows deducing the real dimensions of an object from its pixel dimensions can be calculated (see Camera Calibration (AI-SECURITY only), p. 25).
Fig. 17: Filtering actual size (AI-SECURITY only)
This filter allows eliminating blobs that are too short or too tall, based on their actual size.
Enable filtering: check to activate the filtering settings. You can define minimum and maximum values for the height and width of a blob.
Maximum height: defines the maximum height of a blob.
Minimum height: defines the minimum height of a blob.
Camera Calibration (AI-SECURITY only)
The camera calibration has to be done before filtering by actual size (see Filtering actual size (AI-SECURITY only), p. 24).
Fig. 18: Camera Calibration (AI-SECURITY only)
Camera calibration provides the geometric parameters needed to deduce the actual size of objects from their pixel dimensions.
Camera height (m): mounting height of the camera in meters.
Horizontal angle: the camera's horizontal angle of view in degrees. It is available in the datasheet of a fixed-focal camera and must be calculated for varifocal cameras.
Vertical angle: the camera's vertical angle of view in degrees. It is available in the datasheet of a fixed-focal camera and must be calculated for varifocal cameras.
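Assuming a simple pinhole model, the angle of view and the image resolution give an angular size per pixel, from which a rough object height at a known distance follows. This is a sketch under stated assumptions, not the app's exact formula.

```python
import math

def object_height_m(pixel_height, distance_m, vertical_angle_deg,
                    image_height_px):
    """Rough real-world height of an object that appears pixel_height
    pixels tall at distance_m from the camera, assuming a pinhole model
    and an object near the image centre."""
    ang = math.radians(pixel_height * vertical_angle_deg / image_height_px)
    return 2.0 * distance_m * math.tan(ang / 2.0)
```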
Algorithm parameters (AI-SECURITY only)
Algorithm calibration
Algorithm calibration provides a collection of samples to train an algorithm that calculates the actual dimensions from the pixel dimensions.
Fig. 19: Algorithm calibration (AI-SECURITY only)
Show training data: check to show training data in the preview image.
Rotation (degrees): camera rotation with respect to the horizontal plane.
Add element: ask a person of known height to move to different positions in the scene, at different distances from the camera. Draw a rectangle around the person every time he or she stops.
Delete element: click to delete the selected element.
Height (m): height of the element in meters.
Shadow removal (AI-SECURITY only)
The algorithm for shadow removal is based on the analysis of the chromaticity difference between the
background and the current frame, since the shadows typically make the pixels darker.
Fig. 20: Shadow removal (AI-SECURITY only)
Enable shadow removal: check to activate the shadow removal settings.
Min fg-to-bg brightness ratio: decreasing this value makes the algorithm more sensitive.
Max fg-to-bg brightness ratio: increasing this value makes the algorithm more sensitive.
Max hue difference: increasing this value makes the algorithm more sensitive, so that it also removes strong shadows.
Max saturation increase: increasing this value makes the algorithm more sensitive, so that it also removes strong shadows.
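A per-pixel sketch of the test these four thresholds control (illustrative defaults; the app works on its own colour representation):

```python
def is_shadow(fg, bg, min_ratio=0.4, max_ratio=0.95,
              max_hue_diff=10, max_sat_increase=20):
    """fg and bg are (brightness, hue, saturation) triples for one pixel.
    A shadow pixel is a darker version of the background with nearly the
    same chromaticity. Threshold defaults here are assumptions."""
    if bg[0] == 0:
        return False
    ratio = fg[0] / bg[0]
    return (min_ratio <= ratio <= max_ratio
            and abs(fg[1] - bg[1]) <= max_hue_diff
            and fg[2] - bg[2] <= max_sat_increase)
```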
Brightness control
Fig. 21: Brightness control
When sudden changes in brightness occur in the scene, the difference between the current frame and the background instantly becomes very high, generating a lot of noise in the foreground mask. Detecting this abnormal situation allows the application to stop processing for a few moments, letting the background adapt automatically to the changed brightness of the scene. For efficiency reasons, the algorithm works on a grid built on the image and evaluates the brightness differences only at the grid intersections.
Performance
Fig. 22: Performance
Performance optimizations make the algorithms more efficient.
Spatial decimation: reduces the resolution at which the algorithm processes images. The size can be reduced by a factor of 2 or 4, processing an image that is a quarter or a sixteenth of the initial one, respectively.
Temporal decimation: "discards" some frames, processing one picture every K milliseconds.
ROI: performs the image processing only in the region drawn by the user.
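Both decimations can be sketched in a few lines (illustrative, not the camera's implementation):

```python
def spatial_decimate(image, factor=2):
    """Keep every factor-th pixel in both directions: factor 2 keeps a
    quarter of the pixels, factor 4 a sixteenth."""
    return [row[::factor] for row in image[::factor]]

def temporal_decimate(timestamps_ms, period_ms):
    """Keep at most one frame every period_ms milliseconds."""
    kept, next_t = [], 0
    for t in timestamps_ms:
        if t >= next_t:
            kept.append(t)
            next_t = t + period_ms
    return kept

image = [[0] * 8 for _ in range(8)]
```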
Blob detection (AI-SECURITY only)
Fig. 23: Blob detection (AI-SECURITY only)
Tight bounding box (horizontal): reduces the horizontal dimension of the bounding box by centering it with respect to the centroid.
Tight bounding box (vertical): reduces the vertical dimension of the bounding box by centering it with respect to the centroid.
Blob number limit: limits the number of blobs detected by the plugin in a single frame.
Stream
Fig. 24: Stream
The image can be processed rotated with respect to the one acquired by the camera. This may be useful, for example, when you want to install a camera in portrait mode, so as to take advantage of the camera's horizontal opening angle to frame a standing person.
Device name: changes the name of the stream.
Rotation (degrees): the image can be rotated by 90°, 180° or 270°.
Event notification
All AI apps can notify each event simultaneously to multiple recipients. You can enable and configure each recipient in the specific section of the events panel. You can also specify, for each event, the channel on which you want to be notified. In the configuration section, it is possible to enable the sending of only the desired events. This way you can completely customize the sending of events by choosing which events to send on each channel.
AI-RETAIL Events
Counting event is generated every time a person crosses a people counting sensor. The event gives information about the number of persons that crossed the sensor simultaneously and about the total number of crossings counted by the sensor since the last reset. It can be sent with or without images.
Aggregate event is generated when the number of persons (IN-OUT) is greater than a threshold configured by the user. Such an event can be used as an alarm or as a notification of overcrowding in the case of a single entrance/exit gate. It can be sent with or without images.
Crowd event is generated periodically, with a period specified by the user during plugin configuration, giving an estimate of the average number of persons in the considered period. Such an event can be used for collecting statistics about the retail shop. It can be sent ONLY without images.
Overcrowd event is generated when the estimated number of persons in the sensor area is greater than a threshold configured by the user. Such an event can be used as an alarm or as a notification of overcrowding. It can be sent with or without images.
Test event is generated by the user by clicking the specific button in AI-Config. It can be used to verify the communication with the event collectors.
AI-BIO Events
Bio event is generated when a person whose face has been detected leaves the scene. The event gives information about the gender, the age category and the persistence time of each person in front of the camera. It can be sent with or without images.
Digital_Signage event is generated when persons are detected in front of the camera after a minimum period of persistence. The event gives information about the average gender and age of the persons. It can be sent with or without images.
Test event is generated by the user by clicking the specific button in AI-Config. It can be used to verify the communication with the event collectors.
AI-SECURITY Events
Sterile_Zone event is generated when an intruder persists in a sterile zone. The event gives information about the position of the object that generated the alarm. It can be sent with or without images.
Crossing_Line event is generated when an object crosses a line. The event gives information about the position of the object that generated the alarm. It can be sent with or without images.
Intrusion_Pro event is generated when an object crosses multiple lines. The event gives information about the position of the object that generated the alarm. It can be sent with or without images.
Lost event is generated when an object is abandoned or removed in a lost-object sensor area. The event gives information about the position of the object that generated the alarm. It can be sent with or without images.
Loitering event is generated when loitering behavior is detected in a loitering sensor area. The event gives information about the position of the object that generated the alarm. It can be sent with or without images.
Test event is generated by the user by clicking the specific button in AI-Config. It can be used to verify the communication with the event collectors.
AI-TRAFFIC Events
Sterile_Zone event is generated when an intruder persists in a sterile zone. The event gives information about the position of the object that generated the alarm. It can be sent with or without images.
Crossing_Line event is generated when an object crosses a line. The event gives information about the position of the object that generated the alarm. It can be sent with or without images.
Intrusion_Pro event is generated when an object crosses multiple lines. The event gives information about the position of the object that generated the alarm. It can be sent with or without images.
Lost event is generated when an object is abandoned or removed in a lost-object sensor area. The event gives information about the position of the object that generated the alarm. It can be sent with or without images.
Loitering event is generated when loitering behavior is detected in a loitering sensor area. The event gives information about the position of the object that generated the alarm. It can be sent with or without images.
Test event is generated by the user by clicking the specific button in AI-Config. It can be used to verify the communication with the event collectors.
Image saving options
Fig. 25: Image saving options
Embed metadata: activate to enable the sending of annotated images (with sensors and bounding boxes, for example) associated with the events.
Line thickness: specifies the thickness of the bounding boxes.
Font size: specifies the font size of the superimposed strings.
Modified bounding box: when enabled, a bounding box is drawn that allows observing the orientation of the object in the image.
Timestamp overlay: shows the date and time overlay at the top right of the image.
Finally, since many event managers allow sending images in a time interval before and after the event, it is possible to specify the buffer size in frames and the time interval between consecutive frames saved in the buffer.
ATTENTION: The buffer size and the temporal decimation with which the frames are stored impose a limit on the number of PRE and POST seconds of images that can be associated with the events.
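The limit follows directly from the two settings: the buffer can never span more seconds than frames times interval. A quick sanity check (illustrative):

```python
def buffer_span_seconds(buffer_frames, frame_interval_ms):
    """Maximum PRE + POST seconds of images the buffer can cover."""
    return buffer_frames * frame_interval_ms / 1000.0

# e.g. 20 frames saved every 500 ms cover at most 10 s of pre+post images
span = buffer_span_seconds(20, 500)
```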
Embedded AI-Dash
Fig. 26: Embedded AI-Dash
Enable AI-Dashboard embedded: activate to send events to AI-Dashboard embedded.
Embedded AI-Dash folder: folder in which the AI-Dashboard embedded database is created.
Maximum size: maximum size (in MB) that AI-Dashboard embedded can occupy on the device.
Send images: activate to send event images to the AI-Dashboard embedded database.
# Sec pre-event: number of seconds of images before the event.
# Sec post-event: number of seconds of images after the event.
External AI-Dash
Fig. 27: External AI-Dash
Enable sending events: activate to send events to the external AI-Dash.
IP: IP address of the server on which AI-Dash is installed (both server version and cloud version).
Port: port on which AI-Dash listens.
AI-Dash ID: once the identifier related to your site and company has been created on AI-Dash PRO, you can insert it in this field. For more details, please refer to the AI-Dash PRO documentation.
Backward compatibility with AI-Dash: enable this field if you have AI-Dash and not the new AI-Dash PRO (for more details, please refer to the custom server notification below).
Send images: activate to send event images to the AI-Dash database.
# Sec pre-event: number of seconds of images before the event.
# Sec post-event: number of seconds of images after the event.
ATTENTION! To receive events, it may be necessary to disable the firewall.
Wisenet WAVE
Fig. 28: Wisenet WAVE
Enable sending events: activate to send events to Wisenet WAVE.
IP: IP address of the Wisenet WAVE VMS.
Port: port number of the Wisenet WAVE VMS.
Username: username to authenticate to the Wisenet WAVE VMS.
Password: password to authenticate to the Wisenet WAVE VMS.
Use HTTPS: activate to send events through HTTPS.
The sending of events to Wisenet WAVE is not supported for Crowd events.
Hanwha SSM
Fig. 29: Hanwha SSM
Enable sending events: activate to send events to Hanwha SSM.
IP: IP address of the server on which SSM is installed.
Port: port number of the SSM.
Device GUID: device identifier as read on SSM.
Set the server timezone: timezone of the SSM server.
The sending of events to Hanwha SSM is not supported for Crowd events.
Text Sender Configuration
This mechanism integrates the app with the Wisenet NVR.
Fig. 30: Text sender configuration
Enable sending events: activate to send events.
IP: IP address of the receiving server.
Port: port number.
Path: path for the POST to the receiving server.
MIME type: MIME type with which the message will be transmitted.
Charset: character set for the message text.
Use URL Encode: indicates that the message is URL-encoded for sending.
Message format: message text sent to the server. The following placeholders can be used in the message text:
- event name: %e
- device name: %d
- sensor name: %s
- date: %t (format DD/MM/YYYY)
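The placeholder expansion (with optional URL encoding) can be sketched as below; the function name is illustrative.

```python
from urllib.parse import quote

def format_message(template, event, device, sensor, date_str,
                   url_encode=False):
    """Expand the %e, %d, %s and %t placeholders in the message text."""
    msg = (template.replace("%e", event)
                   .replace("%d", device)
                   .replace("%s", sensor)
                   .replace("%t", date_str))
    return quote(msg) if url_encode else msg

plain = format_message("%e on %d (sensor %s) at %t",
                       "Crossing_Line", "cam1", "gate", "01/02/2024")
```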
The sending of text events is not supported for Crowd events.
Digital output
Fig. 31: Digital output
Enable sending events: activate to send events via a digital output.
Single pulse duration (ms): duration of a single pulse in milliseconds.
Pulse interval (ms): time in milliseconds between two pulses.
Number of pulses: number of pulses sent through the alarm output port.
Device: device on which the application is running.
Pin: pin you want to use on the device.
The sending of events to digital outputs is not supported for Crowd events.
HTTP I/O
Fig. 32: HTTP I/O
Enable sending events: activate to send events via generic I/O (for example, to call the CGIs of the Wisenet NVR).
IP: IP address of the remote I/O.
Port: port on which the remote I/O is listening.
Path: path of the remote I/O.
Username: username to connect to the remote I/O.
Password: password to connect to the remote I/O.
Parameters: query string with all the required parameters. The format allows adding information about the event. The following tags can be added to the message:
- event name: %e
- device name: %d
- sensor name: %s
- date: %t (format DD/MM/YYYY)
Use HTTPS: if checked, send through HTTPS.
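The request the app issues can be sketched by assembling the URL from the fields above (illustrative; the path and parameters are hypothetical and no request is actually sent here):

```python
from urllib.parse import urlencode

def build_io_url(ip, port, path, params, use_https=False):
    """Assemble the remote I/O request URL from the configured fields."""
    scheme = "https" if use_https else "http"
    return f"{scheme}://{ip}:{port}/{path.lstrip('/')}?{urlencode(params)}"

# Hypothetical path and parameters, for illustration only.
url = build_io_url("192.168.0.10", 80, "/cgi-bin/io", {"event": "%e"})
```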
Example: setting an alarm duration of 10 seconds on the Hanwha NVR by using Hanwha SUNAPI:

E-Mail

Enable sending events: activate to send events via e-mail.
Sender: e-mail address of the sender.
Username: sender's username for SMTP server access.
Password: sender's password for SMTP server access.
SMTP server: address of the SMTP server.
SMTP port: port number of the SMTP server.
Recipients: you can enter multiple e-mail addresses separated by semicolons.
# Sec pre-event: number of seconds of images before the event.
# Sec post-event: number of seconds of images after the event.
The sending of events by e-mail is not supported for Crowd events.
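The composition side of this channel can be sketched with the standard library (illustrative; actual delivery would use smtplib.SMTP with the server, port and credentials configured above):

```python
from email.message import EmailMessage

def build_event_mail(sender, recipients, event_name, body):
    """Compose the notification e-mail; recipients is the
    semicolon-separated list from the Recipients field."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = ", ".join(r.strip() for r in recipients.split(";") if r.strip())
    msg["Subject"] = f"AI-App event: {event_name}"  # subject line is illustrative
    msg.set_content(body)
    return msg

mail = build_event_mail("cam@example.com", "a@example.com; b@example.com",
                        "Overcrowd", "Overcrowd detected at entrance 1")
```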
Sending event to Milestone
Fig. 34: Sending event to Milestone
Enable sending events: activate to send events to Milestone XProtect®.
IP server: IP address of the server on which Milestone XProtect® is installed (both server version and cloud version).
Server port: port number on which Milestone XProtect® listens for events.
IP device: IP address of the device.
Timezone: timezone of the Milestone XProtect® server.
The sending of events to Milestone XProtect® is not supported for Crowd events.
Sending event to Arteco EVERYWHERE
Fig. 35: Sending event to Arteco EVERYWHERE
Enable sending events: activate to send events to Arteco EVERYWHERE.
IP: IP address of the server on which Arteco EVERYWHERE is installed (both server version and cloud version).
Server port: port number on which the Arteco EVERYWHERE server listens.
Username: username for login to the Arteco EVERYWHERE server.
Password: password for login to the Arteco EVERYWHERE server.
Number of output: output number associated with the event.
The sending of events to Arteco EVERYWHERE is not supported for Crowd events.
Sending event to Arteco NEXT
Fig. 36: Sending event to Arteco NEXT
Enable sending events: Activate to send events to Arteco NEXT.
IP: IP address of the server on which you installed Arteco NEXT, either the server version or the cloud
version.
Server port: Port number to listen for the Arteco NEXT server.
Username: Username for login to Arteco NEXT server.
Password: Password for login to Arteco NEXT server.
Connector ID: Identification of the connector defined in Arteco NEXT for sending event notifications.
Camera ID: Identification of the camera defined in Arteco NEXT for sending event notifications.
Description: Information that will be displayed in Arteco NEXT related to the application of video analysis.
The sending of events to Arteco NEXT is not supported for Crowd events.
Sending event to Avigilon POS
Fig. 37: Sending event to Avigilon POS
Enable sending events: Activate to send events to Avigilon POS.
Port: Port number on which the Avigilon server is listening.
Beginning event string: characters at the beginning of the event.
Ending event string: characters at the end of the event.
The sending of events to Avigilon POS is not supported for Crowd events.
Sending event to FTP server
Fig. 38: Sending event to FTP server
Enable sending events: Activate to send events to an FTP server.
IP: IP address of the FTP server.
Port: port number of the FTP server.
Username: Username to authenticate to the FTP server.
Password: Password to authenticate to the FTP server.
Path of destination: Path, defined from the FTP root folder, to which the files are transferred on the server.
Send images: check to include images in the event sent.
Remote server
Fig. 39: Sending event to Remote server
Enable sending events: Activate to send events to the remote server.
IP Server: IP address of the remote server.
Port: port number of the remote server.
Path: Path for the POST to the receiving server.
Send json as “form-data”: Enables URL encoding for the message sent.
Send images: check to include images in the event sent.
# Sec pre-event: Number of seconds of images before event.
# Sec post-event: Number of seconds of images after event.
Backward compatibility with AI-Dash: Enable this field if you want to receive events compliant with AI-Dash
and not with the new AI-Dash PRO (for details, see the custom server notification below).
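A receiving server must handle both body formats described above: the raw JSON document, or the same JSON URL-encoded when “Send json as form-data” is checked. A minimal decoding sketch in Python (the form field name `json` is an assumption, not stated in this manual):

```python
import json
from urllib.parse import parse_qs

def decode_event(body: str, form_data: bool) -> dict:
    """Decode an event body sent by the camera.

    If form-data mode is enabled, the JSON payload arrives URL-encoded
    under a form field (the field name 'json' is an assumption);
    otherwise the body is the raw JSON document.
    """
    if form_data:
        fields = parse_qs(body)
        return json.loads(fields["json"][0])
    return json.loads(body)

# Raw JSON body (the default):
raw = '{"id_source": "cam-entrance", "event_type": "Counting"}'
print(decode_event(raw, form_data=False)["event_type"])  # Counting

# The same payload URL-encoded as form-data:
encoded = ("json=%7B%22id_source%22%3A%20%22cam-entrance%22%2C"
           "%20%22event_type%22%3A%20%22Counting%22%7D")
print(decode_event(encoded, form_data=True)["id_source"])  # cam-entrance
```

Checking both branches this way makes it easy to verify the server against whichever mode is configured on the camera.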
Input by web request
Event notification can be triggered through the web request event_switch.cgi, which is available for all
applications on all platforms.
Fig. 40: Input by web request
Use event activation/deactivation via web request: activate to manage the input via web request.
Password: Required to avoid fraudulent activation/deactivation.
Events initially enabled: If enabled, the events are initially active and are inhibited when web inputs arrive.
Otherwise, events are initially inhibited and are activated when web inputs arrive.
Behaviour: Possible values are: timed or on/off. An on/off input enables/disables the sending of events on
the rising edge. A timed input enables/disables the sending of events for a certain time interval, specified by
the "Switch duration" parameter.
EXAMPLE:
Disable events (because they are initially enabled) on a device with IP 192.168.1.1 and password «foo». If
the behaviour is Timed, the events will be disabled for "Switch duration" milliseconds.
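Only the CGI name event_switch.cgi, the device IP and the password are given here; the query parameter names below (pwd, enable) are illustrative assumptions, so such a request might look like:

```
http://192.168.1.1/event_switch.cgi?pwd=foo&enable=0
```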
The plugin also allows sending sequences of HTTP requests, separated by a configurable time interval. For
example, you can move a PTZ camera across different presets or create a custom sequence to drive remote
I/O devices. It is possible to configure an unlimited number of requests in the sequence.
Fig. 41: Input by web request
Enable sending events: Activate to send events via an HTTP request sequence.
Suspend elaboration during sequence: Enable to suspend processing during the sequence.
Http(s) URI: The path of the HTTP(S) request.
Time before next item (s): Time interval in seconds before calling the next request in the sequence.
Custom server – JSON event format

id_source (string): Name of the device, specified in the plugin configuration. Events: All
event_type (string): Type of event. It can assume the values: Counting, Aggregate, Crowd, Overcrowd.
Events: All
timestamp (string): Value which represents the number of seconds passed since 00:00 of the 1st January
1970 UTC (for instance, a Unix timestamp). Events: All
sensor_id (integer): Id associated to the sensor which generated the event. Events: All
sensor_name (string): Name associated to the sensor which generated the event. Events: All
mac_address (string): MAC address of the device that generated the event. Events: All
dash_id (string): An identifier of the site and the company, specified in the plugin configuration. Events: All
people_number (integer): For Counting events, represents the number of persons crossing the sensor
simultaneously. For Aggregate events, represents the current IN-OUT value. For Crowd and Overcrowd
events, represents the number of estimated persons in the sensor. Events: All
actual_count (integer): For Counting events, represents the total number of persons counted by the sensor
since the last reset. For Aggregate events, represents the current IN-OUT value. Events: Counting, Aggregate
period (integer): For Crowd events, interval between two consecutive events. Events: Crowd
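A hypothetical Counting event assembled from these fields can be parsed as in the sketch below; all concrete values are illustrative, not captured from a device:

```python
import json

# Hypothetical Counting event built from the documented fields;
# every value here is illustrative.
sample = '''
{
  "id_source": "shop-entrance",
  "event_type": "Counting",
  "timestamp": "1700000000",
  "sensor_id": 1,
  "sensor_name": "gate-in",
  "mac_address": "00:11:22:33:44:55",
  "dash_id": "site-01",
  "people_number": 2,
  "actual_count": 153
}
'''

event = json.loads(sample)
# event_type is limited to the four documented values.
assert event["event_type"] in ("Counting", "Aggregate", "Crowd", "Overcrowd")
print(f'{event["sensor_name"]}: {event["people_number"]} crossing, total {event["actual_count"]}')
# prints "gate-in: 2 crossing, total 153"
```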
AI-Dash - troubleshooting
In case of low bandwidth (e.g. because of high network load or undersized systems) or if the camera is
overloaded, the live view may load slowly or not show at all. In addition, some browsers may activate filters
that block streaming by default (usually Chrome, Firefox and Safari do not).
In these cases:
n Reload the page and wait for the live image
n Use a different web browser
If the displayed image is green, try the following operations:
n Restart the camera, or alternatively reset it to the initial settings (except those related to the application);
n Verify that the latest firmware is installed on the camera
n Contact technical support (see Support, p. 5)
5
AI-SECURITY
AI-SECURITY is a bundle including three different products, simultaneously installed on
board of your camera.
n AI-INTRUSION: Intrusion detection in sterile zone and virtual line crossing
n AI-LOST: Abandoned or removed objects detection
n AI-LOITERING: Loitering detection in forbidden areas
AI-SECURITY - camera positions
n Make sure the size of the target (person, object, animal, vehicle) is at least 10x10
pixels.
n If necessary, the camera should be mounted with external illuminators, to
distinguish the targets with natural or artificial illumination.
n The camera should be mounted at a height between 3 and 5 meters.
n The precision of the plugins can be reduced if there are occlusions, waving objects, vehicles which
project light in areas of interest and any other noise that continuously modifies the image.
Fig. 42: Camera positions
AI-INTRUSION
AI-INTRUSION is a video analytic app that is able to detect intruders in indoor and outdoor environments;
thus, the environmental conditions will affect the performance of the application.
The accuracy to be expected under ideal environmental and installation conditions is:
n Recall: 95%
Environment conditions
The position of the camera and the environmental conditions affect the performance of the application.
Performance is best under the following conditions:
n The image must not present flickering, severe noise or artifacts.
n Image must have a resolution of 640x360, 640x480, 320x180, 320x240.
n Rotating (PTZ) security cameras are supported only if they are not moved when the application is
enabled. If the camera is moved, the application must be reconfigured.
n Absence of occlusions (e.g. trees, pillars, buildings, furniture elements, etc.) that do not allow seeing
the people.
n Absence of conditions of high crowding or stopped people that do not allow to count the individuals.
n Absence of stationary or slow-moving people for long periods in the counting area (e.g. Sales people
that encourage customers to enter).
n There must be no fog, clouds or other moving objects whose appearance is similar to the target in the
areas of interest.
n Camera lens must not be dirty, wet or covered in rain or water drops. Camera lens must not be
steamy.
n Absence of "waving objects" (e.g. Meadow with tall grass, trees, sliding doors, etc.) or any other type
of disturbance that causes the continuous modification of the images (moving pixels) in the areas of
interest.
n Camera placement must be stable and solid in a way that wind or external disturbances of other types
will not cause movement of the camera that appears on the image.
n Absence of vehicles with lights projected in areas of interest.
n Correct exposure of the camera: camera must not be in backlight, the framed area must not have
heterogeneous illumination, i.e. partially indoor or partially outdoor. In general, no areas to be
monitored must be almost white or almost black, i.e. the dynamic range must be sufficient to
correctly show detail of objects in the image. If necessary, the camera must be installed with external
illuminators that make it possible to distinguish the people in all natural or artificial lighting
conditions.
n The people must have a sufficient dissimilarity from the background, i.e. there is no explicit
camouflage, where the people are similar to the background in color and texture. Sufficient
dissimilarity means a color difference of at least 5% or a brightness difference of at least 10%.
n The target must stay in the area of interest for a time of at least 1 second.
n The target must have a minimum area of 100 pixels.
n The target must move at a maximum speed of half its width on the image per frame. For example, a
target that is 40 pixels wide at 10 frames per second must move at a speed of no more than 20 pixels
per frame, that is 200 pixels per second.
n The scene must be predominantly non-reflective.
n No hard lights must be present that cast shadows in a way that the background brightness is reduced
to less than 50% of the original value in the image.
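The speed rule above (half the target's width per frame, scaled by the frame rate) can be checked with a short sketch:

```python
def max_speed_px_per_second(target_width_px: float, fps: float) -> float:
    """Maximum tolerated target speed: half the target's width per frame,
    converted to pixels per second using the frame rate."""
    max_px_per_frame = target_width_px / 2
    return max_px_per_frame * fps

# The example from the text: a 40-pixel-wide target at 10 frames per second
# may move at most 20 pixels per frame, i.e. 200 pixels per second.
print(max_speed_px_per_second(40, 10))  # 200.0
```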
n In case of thermal cameras, the image must not be coloured but in grayscale (white for “hot” pixels,
black for “cold” pixels). The camera, thermal or monocular, must always be configured in order to
avoid continuous changes of brightness.
AI-INTRUSION - target size
Intrusion sensors
Fig. 43: Configuration of AI-INTRUSION Intrusion sensors
The configuration section provides the following functions:
Add Sensor: Click this button to draw the area of interest directly on the live image on the left. The area of
interest is a polygon with an unlimited number of sides.
Remove sensor: Click this button to remove the selected sensor from the configuration.
Redraw sensor: Click this button to redraw the current sensor. The current area of interest will be deleted.
ID sensor: define a numeric ID for the sensor.
Sensor name: this name uniquely identifies the main counting sensor (green arrow); it is used to generate
counting events that are sent, for example, to AI-Dash.
Confidence: A small value will make the algorithm very sensitive; with a value that is too large, the
algorithm may not generate alarms.
Inhibition (s): Inhibition time in seconds of the sensor after an alarm has been generated. If an alarm is
generated by the same sensor before the inhibition time is passed, it will be ignored by the system.
Latency alarm (s): Minimum intrusion time (seconds of permanence in the area of interest) before an alarm
is generated. Subjects who stay in the area of interest for less time than the set latency won’t generate any
alarm.
Sensor type: there are two types of sensors:
n Impulsive: generates a single event for the whole duration of the intrusion.
n Levels: generates several types of event: beginning of the intrusion, intrusion continuation (every
“Inhibition” seconds) and end of intrusion.
End time intrusion: after this number of seconds, if no one is in the level sensor, an event of “end of
intrusion” will be sent.
Crossing the line
Fig. 44: Configuration of AI-INTRUSION Crossing line sensors
The configuration section provides the following functions:
Add Sensor: Click this button to draw the area of interest directly on the live image on the left. The area of
interest is a polygon with an unlimited number of sides.
Remove sensor: Click this button to remove the selected sensor from the configuration.
Redraw sensor: Click this button to redraw the current sensor. The current area of interest will be deleted.
ID sensor: define a numeric ID for the sensor.
Sensor name: this name uniquely identifies the sensor; it is used to generate events to be sent, for example,
to AI-Dash.
Crossing line pre confidence: confidence relative to the object before it crosses the line (pre-alarm).
Crossing line post confidence: confidence relative to the activation of the alarm (crossing the line) on an
object already considered in a pre-alarm state.
Crossing line pre latency: time of latency of an object that is in the scene before it crosses the line
(pre-alarm). Time in seconds.
Crossing line post latency: time of latency an object already considered in a pre-alarm state spends in the
scene after it crosses the line. Time in seconds.
Multiple crossing lines
A multiple crossing line sensor is an aggregate sensor inside the scene consisting of a set of crossing lines
(see Crossing the line, p. 66). If the subject crosses all the lines specified in the scene, the alarm will be
generated.
Fig. 45: Configuration of AI-INTRUSION - Multiple crossing line sensors
The configuration section provides the following functions:
Add aggregate sensor: Click this button to draw the area of interest directly on the live image on the left.
The area of interest is a polygon with an unlimited number of sides. The aggregate sensor can contain
multiple crossing lines.
Remove aggregate sensor: Click this button to remove the selected aggregate sensor from the
configuration.
ID sensor: define a numeric ID for the aggregate sensor.
Sensor name: this name uniquely identifies the aggregate sensor; it is used to generate events to be sent,
for example, to AI-Dash.
Crossing time (s): maximum crossing time in seconds between two successive crossing lines.
Crossing line sensors must be added within the aggregate sensor (see Crossing the line, p. 66).
AI-LOITERING
AI-LOITERING is a video analytic app that is able to detect loitering in indoor and outdoor environments;
thus, the environmental conditions will affect the performance of the application. Events can be sent to the
notification channels, FTP servers and third-party servers.
The accuracy to be expected under ideal environmental and installation conditions is:
n Recall: 95%
Fig. 46: AI-LOITERING: configuration
Environment conditions
The position of the camera and the environmental conditions affect the performance of the application.
Performance is best under the following conditions:
n The image must not present flickering, severe noise or artifacts.
n Image must have a resolution of 640x360, 640x480, 320x180, 320x240.
n Rotating (PTZ) security cameras are supported only if they are not moved when the application is
enabled. If the camera is moved, the application must be reconfigured.
n Absence of occlusions (e.g. trees, pillars, buildings, furniture elements, etc.) that do not allow seeing
the people.
n Absence of conditions of high crowding or stopped people that do not allow to count the individuals.
n There must be no fog, clouds or other moving objects whose appearance is similar to the target in the
areas of interest.
n Camera lens must not be dirty, wet or covered in rain or water drops. Camera lens must not be
steamy.
n Absence of "waving objects" (e.g. Meadow with tall grass, trees, sliding doors, etc.) or any other type
of disturbance that causes the continuous modification of the images (moving pixels) in the areas of
interest.
n Camera placement must be stable and solid in a way that wind or external disturbances of other types
will not cause movement of the camera that appears on the image.
n Absence of vehicles with lights projected in areas of interest.
n Correct exposure of the camera: camera must not be in backlight, the framed area must not have
heterogeneous illumination, i.e. partially indoor or partially outdoor. In general, no areas to be
monitored must be almost white or almost black, i.e. the dynamic range must be sufficient to
correctly show detail of objects in the image. If necessary, the camera must be installed with external
illuminators that make it possible to distinguish the people in all natural or artificial lighting
conditions.
n The target must have a sufficient dissimilarity from the background, i.e. there is no explicit
camouflage, where the target is similar to the background in color and texture. Sufficient dissimilarity
means a color difference of at least 5% or a brightness difference of at least 10%.
n The target must stay in the area of interest for a time of at least 5 seconds.
n The target must have a minimum area of 100 pixels.
n The target must move at a maximum speed of half its width on the image per frame. For example, a
target that is 40 pixels wide at 10 frames per second must move at a speed of no more than 20 pixels
per frame, that is 200 pixels per second.
n The scene must be predominantly non-reflective.
n No hard lights must be present that cast shadows in a way that the background brightness is reduced
to less than 50% of the original value in the image.
n In case of thermal cameras, the image must be not coloured but in grayscale (white for “hot” pixels,
black for “cold” pixels). The camera, thermal or monocular, must be always configured in order to
avoid continuous changes of brightness.
Installation constraints
A camera that can be used to detect loitering with AI-LOITERING must comply with the following installation
restrictions (in addition to respecting the environmental conditions):
n It must be installed in such a way that the framed targets (people, vehicles, animals) have a minimum
area of 100 pixels.
n If necessary, it must be installed with external illuminators that make it possible to distinguish the
targets in all natural or artificial lighting conditions.
Configuration of AI-LOITERING sensors
Fig. 47: Configuration of AI-LOITERING sensors
The configuration section provides the following functions:
Add Sensor: Click this button to draw the area of interest directly on the live image on the left. The area of
interest is a polygon with an unlimited number of sides.
Remove sensor: Click this button to remove the selected sensor from the configuration.
Redraw the sensor: Click to delete the current sensor and draw a new one.
ID sensor: define an ID number for the sensor.
Sensor name: this name uniquely identifies the sensor.
Confidence: A small value will make the algorithm very sensitive; with a value that is too large, the
algorithm may not generate alarms.
Inhibition (s): Inhibition time in seconds of the sensor after an alarm has been generated. If an alarm is
generated by the same sensor before the inhibition time is passed, it will be ignored by the system.
Latency alarm (s): Minimum intrusion time (seconds of permanence in the area of interest) before an alarm
is generated. Subjects who stay in the area of interest for less time than the set latency won’t generate any
alarm.
AI-LOST
AI-LOST is a video analytic app that is able to detect abandoned or removed objects in indoor and outdoor
environments; thus, the environmental conditions will affect the performance of the application.
The accuracy to be expected under ideal environmental and installation conditions is:
n Recall: 90%
Fig. 48: AI-LOST: configuration
Environment conditions
The position of the camera and the environmental conditions affect the performance of the application.
Performance is best under the following conditions:
n The image must not present flickering, severe noise or artifacts.
n Image must have a resolution of 640x360, 640x480, 320x180, 320x240.
n Rotating (PTZ) security cameras are supported only if they are not moved when the application is
enabled. If the camera is moved, the application must be reconfigured.
n Absence of occlusions (e.g. trees, pillars, buildings, furniture elements, etc.) that do not allow seeing
the people.
n Absence of conditions of high crowding or stopped people that do not allow to count the individuals.
n There must be no fog, clouds or other moving objects whose appearance is similar to the target in the
areas of interest.
n Camera lens must not be dirty, wet or covered in rain or water drops. Camera lens must not be
steamy.
n Absence of "waving objects" (e.g. Meadow with tall grass, trees, sliding doors, etc.) or any other type
of disturbance that causes the continuous modification of the images (moving pixels) in the areas of
interest.
n Camera placement must be stable and solid in a way that wind or external disturbances of other types
will not cause movement of the camera that appears on the image.
n Absence of vehicles with lights projected in areas of interest.
n Correct exposure of the camera: camera must not be in backlight, the framed area must not have
heterogeneous illumination, i.e. partially indoor or partially outdoor. In general, no areas to be
monitored must be almost white or almost black, i.e. the dynamic range must be sufficient to
correctly show detail of objects in the image. If necessary, the camera must be installed with external
illuminators that make it possible to distinguish the people in all natural or artificial lighting
conditions.
n The target must have a sufficient dissimilarity from the background, i.e. there is no explicit
camouflage, where the target is similar to the background in color and texture. Sufficient dissimilarity
means a color difference of at least 5% or a brightness difference of at least 10%.
n The target must stay in the area of interest for a time of at least 5 seconds.
n The target must have a minimum area of 100 pixels.
n The target must move at a maximum speed of half its width on the image per frame. For example, a
target that is 40 pixels wide at 10 frames per second must move at a speed of no more than 20 pixels
per frame, that is 200 pixels per second.
n The scene must be predominantly non-reflective.
n No hard lights must be present that cast shadows in a way that the background brightness is reduced
to less than 50% of the original value in the image.
n In case of thermal cameras, the image must be not coloured but in grayscale (white for “hot” pixels,
black for “cold” pixels). The camera, thermal or monocular, must be always configured in order to
avoid continuous changes of brightness.
AI-LOST - target size
Installation constraints
A camera that can be used to detect loitering with AI-LOITERING must comply with the following installation
restrictions (in addition to the respect of the environmental conditions):
n It must be installed in such a way that the framed targets (people, vehicles, animals) have a minimum
area of 100 pixels.
n If necessary, it must be installed with external illuminators that make it possible to distinguish the
targets in all natural or artificial lighting conditions.
Configuration of AI-LOST sensors
Fig. 49: Configuration of AI-LOST sensors
The configuration section provides the following functions:
Add Sensor: Click this button to draw the area of interest directly on the live image on the left. The area of
interest is a polygon with an unlimited number of sides.
Remove sensor: Click this button to remove the selected sensor from the configuration.
Redraw the sensor: Click to delete the current sensor and draw a new one.
ID sensor: define an ID number for the sensor.
Sensor name: this name uniquely identifies the sensor.
Confidence: A small value will make the algorithm very sensitive; with a value that is too large, the
algorithm may not generate alarms.
Inhibition (s): Inhibition time in seconds of the sensor after an alarm has been generated. If an alarm is
generated by the same sensor before the inhibition time is passed, it will be ignored by the system.
Latency alarm (s): Minimum intrusion time (seconds of permanence in the area of interest) before an alarm
is generated. Subjects who stay in the area of interest for less time than the set latency won’t generate any
alarm.
Configuration of AI-LOST Entrance areas
In order to reduce the number of false positives and to consider only the objects which enter from specific
parts of the image, it is possible to draw an unlimited number of entrance areas.
Fig. 50: Configuration of AI-LOST entrance areas
The configuration section provides the following functions:
Add entrance area: Click this button to draw an entrance area directly on the live image on the left. The
entrance area is a polygon with an unlimited number of sides.
Delete entrance area: Click this button to remove the selected entrance area from the configuration.
6
AI-RETAIL3
AI-RETAIL3 is a bundle including three different products, simultaneously installed on
board of your camera.
n AI-PEOPLE: People counting through gates
n AI-CROWD: Crowd estimation
n AI-OVERCROWD: Overcrowding detection for queue management
AI-RETAIL3 - camera positions
n The camera should be mounted with a reduced focal length and a horizontal field
of view in the range between 60° and 120°, chosen with respect to the gate.
n The camera must be mounted in an overhead position, considering a 90° angle
measured to the ground.
n The camera should be mounted at a height between 3 and 5 meters.
n The precision of the plugins is maximum when people are recorded from the top
without distortion on the sides.
Fig. 51: camera position
Recommended distances

Camera height (m)    Maximum gate width (m)
3                    6
3.5                  7.5
4                    9
4.5                  10
5                    12
AI-PEOPLE
AI-PEOPLE is a video analytic plugin optimized to count people crossing a gate in typical retail scenarios. It
generates events that can be managed by all the notification channels.
The accuracy to be expected under ideal environmental and installation conditions is:
Indoor:
n Recall: 85%
n Precision: 95%
Outdoor:
n Recall: 85%
n Precision: 85%
Environment conditions
The position of the camera and the environmental conditions affect the performance of the application.
Performance is best under the following conditions:
n The image must not present flickering, severe noise or artifacts.
n Image must have a resolution of 640x360, 640x480, 320x180, 320x240.
n Rotating (PTZ) security cameras are supported only if they are not moved when the application is
enabled. If the camera is moved, the application must be reconfigured.
n Absence of occlusions (e.g. trees, pillars, buildings, furniture elements, etc.) that do not allow seeing
the people.
n Absence of conditions of high crowding or stopped people that do not allow to count the individuals.
n Absence of stationary or slow-moving people for long periods in the counting area (e.g. Sales people
that encourage customers to enter).
n There must be no other moving objects whose appearance is similar to the people in the areas of
interest.
n Camera lens must not be dirty, wet or covered in rain or water drops. Camera lens must not be
steamy.
n Absence of "waving objects" (e.g. Meadow with tall grass, trees, sliding doors, etc.) or any other type
of disturbance that causes the continuous modification of the images (moving pixels) in the areas of
interest.
n Camera placement must be stable and solid in a way that wind or external disturbances of other types
will not cause movement of the camera that appears on the image.
n Absence of vehicles with lights projected in areas of interest.
n Correct exposure of the camera: camera must not be in backlight, the framed area must not have
heterogeneous illumination, i.e. partially indoor or partially outdoor. In general, no areas to be
monitored must be almost white or almost black, i.e. the dynamic range must be sufficient to
correctly show detail of objects in the image. If necessary, the camera must be installed with external
illuminators that make it possible to distinguish the people in all natural or artificial lighting
conditions.
n The people must have a sufficient dissimilarity from the background, i.e. there is no explicit
camouflage, where the people are similar to the background in color and texture. Sufficient
dissimilarity means at least a color difference of at least 5% or a brightness difference of at least 10%.
n The people must have a minimum area of 600 pixels (e.g. 20x30, 15x40, ...).
n The floor must be a predominantly non-reflective surface.
n No hard lights must be present that cast shadows in a way that the background brightness is reduced
to less than 50% of the original value in the image.
Drawing the people counting sensor
When drawing the counting sensor the following 3 guidelines must be considered:
n Correct width: It must occupy the entire area of the gate horizontally
n Correct height: The vertical half of the sensor should include head and shoulders
n Correct position: the sensor must be parallel to the gate, so that people cross it from top to bottom or
vice versa, and must not include moving objects in its area (doors, sliding or not, screens, etc.)
Fig. 52: Examples of correct and wrong sensor drawing
Configuring people counting
Fig. 53: Configuration of AI-PEOPLE
The configuration section provides the following functions:
Reset counters: when checked, the counters associated to the counting sensors will be reset when the
application is restarted.
Add Sensor: Click this button to draw a virtual sensor with the mouse (click and drag). The sensor can be
moved and resized by dragging the nodes. You can direct the sensor (the counting direction is given by the
arrow), for example by rotating the sensor until the arrow points in the desired direction, or specify whether
the sensor is monodirectional or bidirectional.
Remove sensor: Click this button to remove the selected sensor from the configuration.
Real width (m): The real width of the sensor in meters. When the real dimension cannot be measured, an
empirical rule is to multiply the maximum number of people who can cross the gate at the same time by
0.75. However, this only approximates the real condition and may not be precise enough.
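The empirical rule can be sketched as follows (illustrative only; the function name is an assumption, not part of the camera software):

```python
def estimate_real_width_m(max_simultaneous_people: int) -> float:
    """Empirical estimate of the sensor's real width in meters:
    the maximum number of people who can cross the gate at once, times 0.75."""
    return max_simultaneous_people * 0.75

# For a gate that three people can cross side by side:
print(estimate_real_width_m(3))  # 2.25
```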
Bidirectional: specify if the sensor is mono or bidirectional.
Sensor name: This name uniquely identifies the main counting sensor (green arrow); it is used to generate
counting events that are sent, for example, to AI-Dash.
Sensor name (reverse direction): This name uniquely identifies the counting sensor in the reverse direction
(red arrow); it is used to generate counting events that are sent, for example, to AI-Dash.
Sensor activation threshold: A value that is too small (< 0.1) makes the sensor very sensitive and can
produce false positives. A value that is too large (> 0.6) makes the sensor less sensitive, and it may miss
some people crossing.
After checking Enable aggregate counting, events can be sent when the difference between entries and
exits exceeds a certain threshold (see AI-PEOPLE: Aggregate counting, p. 88).
AI-PEOPLE: Aggregate counting
Configuring aggregate counting
Before configuring aggregate counting, make sure the basic AI-PEOPLE, p. 82 configuration is complete.
Fig. 54: Aggregate counting
After checking Enable aggregate counting, events can be sent when the difference between entries and
exits exceeds a certain threshold.
The following parameters must be configured to use this functionality:
ID sensor: A unique ID, generated automatically.
Sensor name: This name uniquely identifies the aggregate sensor; it is used to generate counting events
that are sent, for example, to AI-Dash.
Threshold: The event is generated when the difference between entries and exits exceeds this value.
In the section Sensors to aggregate, you can add the desired number of sensors, which together form the
aggregate sensor:
Aggregate sensor: Drop-down menu for selecting the name of a sensor created in the “Counting” section
(BE AWARE: if you created a bidirectional sensor in the “Counting” section, two sensors are generated,
each with its own name and identifier).
Sensor type: Specifies whether the sensor selected in the previous drop-down menu counts entries (IN) or
exits (OUT).
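The aggregation logic described above can be sketched as follows (a minimal illustration; the function names and data structures are assumptions, not the camera's internal API):

```python
def aggregate_difference(counts: dict, sensor_types: dict) -> int:
    """Difference between entries and exits over the aggregated sensors.

    counts maps each sensor name to its current count; sensor_types maps
    each sensor name to "IN" or "OUT"."""
    entries = sum(c for name, c in counts.items() if sensor_types[name] == "IN")
    exits = sum(c for name, c in counts.items() if sensor_types[name] == "OUT")
    return entries - exits

def should_send_event(counts: dict, sensor_types: dict, threshold: int) -> bool:
    """An aggregate event is sent when entries minus exits exceeds the threshold."""
    return aggregate_difference(counts, sensor_types) > threshold

counts = {"gate_in": 42, "gate_out": 30}
types = {"gate_in": "IN", "gate_out": "OUT"}
print(should_send_event(counts, types, threshold=10))  # True: 42 - 30 = 12 > 10
```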
AI-CROWD
AI-CROWD is a plugin for crowded areas where people may stop or move slowly, possibly forming queues.
It estimates the number of people inside one or more areas of interest and generates events that can be
managed by AI-Dash, FTP servers and third-party servers.
The accuracy to be expected is 90% under ideal environmental and installation conditions.
Fig. 55: AI-CROWD: configuration
Environment conditions
The position of the camera and the environmental conditions affect the performance of the application.
Performance is best under the following conditions:
n The image must not present flickering, severe noise or artifacts.
n Image must have a resolution of 640x360, 640x480, 320x180, 320x240.
n Rotating (PTZ) security cameras are supported only if they are not moved when the application is
enabled. If the camera is moved, the application must be reconfigured.
n Absence of occlusions (e.g. trees, pillars, buildings, furniture, etc.) that prevent people from being
seen.
n Absence of highly crowded conditions or stationary people that prevent individuals from being
counted.
n Absence of stationary or slow-moving people for long periods in the counting area (e.g. sales people
encouraging customers to enter).
n There must be no other moving objects whose appearance is similar to the people in the areas of
interest.
n The camera lens must not be dirty, wet, covered in rain or water drops, or fogged up.
n Absence of "waving objects" (e.g. meadows with tall grass, trees, sliding doors, etc.) or any other type
of disturbance that causes continuous modification of the images (moving pixels) in the areas of
interest.
n The camera mounting must be stable and solid, so that wind or other external disturbances do not
cause camera movement visible in the image.
n Absence of vehicles with lights projected in areas of interest.
n Correct exposure of the camera: the camera must not be backlit, and the framed area must not have
heterogeneous illumination, i.e. partially indoor and partially outdoor. In general, no areas to be
monitored may be almost white or almost black, i.e. the dynamic range must be sufficient to
correctly show the detail of objects in the image. If necessary, the camera must be installed with
external illuminators that make it possible to distinguish people in all natural or artificial lighting
conditions.
n The people must be sufficiently dissimilar from the background, i.e. there is no explicit camouflage
where the people are similar to the background in color and texture. Sufficient dissimilarity means a
color difference of at least 5% or a brightness difference of at least 10%.
n The people must have a minimum area of 200 pixels (e.g. 10x20, 5x40, ...).
n The floor must be a predominantly non-reflective surface.
n There must be no hard lights casting shadows that reduce the background brightness to less than
50% of its original value in the image.
Drawing the sensor for AI-CROWD
When drawing the crowd estimation sensor, consider the following guideline:
n Configure the minimum area occupied by a person by drawing a rectangle around the shoulders.
Fig. 56: Drawing sensor for AI-CROWD
Configuration of AI-CROWD
Fig. 57: Configuration of AI-CROWD
The configuration section provides the following functions:
Add Sensor: Click this button to draw a virtual sensor by clicking and dragging with the mouse. The sensor
can be moved and resized by dragging its nodes. You can orient the sensor (the counting direction is
indicated by the arrow), for example by rotating it until the arrow points in the desired direction, and
specify whether the sensor is monodirectional or bidirectional.
Remove sensor: Click this button to remove the selected sensor from the configuration.
Redraw the sensor: Click to delete the current sensor and draw a new one.
ID sensor: define an ID number for the sensor.
Sensor name: This name uniquely identifies the main counting sensor (green arrow); it is used to generate
counting events that are sent, for example, to AI-Dash.
Event period (s): The interval in seconds between two consecutive events sent to an external server.
Enable crowd estimation: Check to activate AI-CROWD.
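A minimal sketch of the periodic event emission controlled by Event period (s) (illustrative; `estimate_count` and `send_event` are placeholders standing in for the camera's internals):

```python
import time

def run_crowd_estimation(estimate_count, send_event, event_period_s, stop):
    """Send a crowd-estimation event every event_period_s seconds until stop() returns True."""
    next_due = time.monotonic()
    while not stop():
        now = time.monotonic()
        if now >= next_due:
            # Emit one event with the current estimate, then schedule the next one.
            send_event({"count": estimate_count(), "timestamp": now})
            next_due = now + event_period_s
        time.sleep(0.01)
```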
AI-OVERCROWD
AI-OVERCROWD is a video analytics app that detects overcrowding inside one or more areas of interest in
typical retail scenarios; of course, the position of the camera and the environmental conditions affect
the performance of the application.
The accuracy to be expected is 90% under ideal environmental and installation conditions.
Fig. 58: AI-OVERCROWD
Environment conditions
The position of the camera and the environmental conditions affect the performance of the application.
Performance is best under the following conditions:
n The image must not present flickering, severe noise or artifacts.
n Image must have a resolution of 640x360, 640x480, 320x180, 320x240.
n Rotating (PTZ) security cameras are supported only if they are not moved when the application is
enabled. If the camera is moved, the application must be reconfigured.
n Absence of occlusions (e.g. trees, pillars, buildings, furniture, etc.) that prevent people from being
seen.
n Absence of highly crowded conditions or stationary people that prevent individuals from being
counted.
n Absence of stationary or slow-moving people for long periods in the counting area (e.g. sales people
encouraging customers to enter).
n There must be no other moving objects whose appearance is similar to the people in the areas of
interest.
n The camera lens must not be dirty, wet, covered in rain or water drops, or fogged up.
n Absence of "waving objects" (e.g. meadows with tall grass, trees, sliding doors, etc.) or any other type
of disturbance that causes continuous modification of the images (moving pixels) in the areas of
interest.
n The camera mounting must be stable and solid, so that wind or other external disturbances do not
cause camera movement visible in the image.
n Absence of vehicles with lights projected in areas of interest.
n Correct exposure of the camera: the camera must not be backlit, and the framed area must not have
heterogeneous illumination, i.e. partially indoor and partially outdoor. In general, no areas to be
monitored may be almost white or almost black, i.e. the dynamic range must be sufficient to
correctly show the detail of objects in the image. If necessary, the camera must be installed with
external illuminators that make it possible to distinguish people in all natural or artificial lighting
conditions.
n The people must be sufficiently dissimilar from the background, i.e. there is no explicit camouflage
where the people are similar to the background in color and texture. Sufficient dissimilarity means a
color difference of at least 5% or a brightness difference of at least 10%.
n The people must have a minimum area of 200 pixels (e.g. 10x20, 5x40, ...).
n The floor must be a predominantly non-reflective surface.
n There must be no hard lights casting shadows that reduce the background brightness to less than
50% of its original value in the image.
Drawing the sensor for AI-OVERCROWD
When drawing the crowd estimation sensor, consider the following guideline:
n Configure the minimum area occupied by a person by drawing a rectangle around the shoulders.
Fig. 59: Drawing sensor for AI-OVERCROWD
Configuration of AI-OVERCROWD
Fig. 60: Configuration of AI-OVERCROWD
The configuration section provides the following functions:
Confidence: A small value (< 0.5) makes the algorithm very sensitive, while a value that is too large (> 0.8)
may prevent the algorithm from generating alarms. A value between 0.5 and 0.75 is suggested.
Inhibition (s): The inhibition time of the sensor in seconds after an alarm has been generated. An alarm
generated by the same sensor before the inhibition time has passed is ignored by the system.
Latency (s): The minimum crowding time in seconds (number of people above the configured threshold)
before an alarm is generated.
Overcrowd threshold: If the number of the persons in the region of interest exceeds the selected threshold,
the application creates a new overcrowd event.
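The interplay of Overcrowd threshold, Latency and Inhibition described above can be sketched as follows (a minimal illustration under stated assumptions; the class and method names are not part of the product):

```python
class OvercrowdDetector:
    """Sketch of the latency/inhibition logic: an alarm fires when the people
    count stays above `threshold` for at least `latency_s` seconds, and further
    alarms are suppressed for `inhibition_s` seconds after each alarm."""

    def __init__(self, threshold: int, latency_s: float, inhibition_s: float):
        self.threshold = threshold
        self.latency_s = latency_s
        self.inhibition_s = inhibition_s
        self._over_since = None          # when the count first exceeded the threshold
        self._last_alarm = float("-inf") # time of the most recent alarm

    def update(self, count: int, now: float) -> bool:
        """Feed one measurement; return True if an overcrowd alarm is generated."""
        if count <= self.threshold:
            self._over_since = None
            return False
        if self._over_since is None:
            self._over_since = now
        crowded_long_enough = now - self._over_since >= self.latency_s
        out_of_inhibition = now - self._last_alarm >= self.inhibition_s
        if crowded_long_enough and out_of_inhibition:
            self._last_alarm = now
            return True
        return False
```

For example, with a threshold of 5 people, 2 s of latency and 10 s of inhibition, a count of 6 at t = 0 s does not yet fire, at t = 2.5 s it fires, and at t = 3 s it is suppressed by the inhibition time.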
7
AI-TRAFFIC
AI-TRAFFIC is a bundle including three different products, installed simultaneously on your camera.
n AI-ROAD 3D: gathering of traffic statistics
n AI-INCIDENT: road monitoring for security purposes
Configuration of AI-TRAFFIC analysis
Fig. 61: Configuration of AI-LOST sensors
The configuration section provides the following functions:
Add Sensor: Click this button to draw the area of interest directly on the live image on the left. The area
of interest is a polygon with no limit on the number of sides.
Remove sensor: Click this button to remove the selected sensor from the configuration.
Redraw the sensor: Click to delete the current sensor and draw a new one.
ID sensor: define an ID number for the sensor.
Sensor name: this name uniquely identifies the sensor.
Enable vehicle counting and classification: Enabled by default; allows vehicles to be counted and
classified, also collecting information about the average speed and color of each vehicle. Available in
AI-ROAD 3D.