AXIS Object Analytics FAQ
June 2022

Frequently asked questions
Q: What is the main purpose of AXIS Object Analytics?
A: AXIS Object Analytics is an edge-based video analytics application designed to detect and classify humans and vehicles in a camera’s field of view. Because it classifies the objects it detects, you can choose whether to detect humans, vehicles, or both within a configured scenario.
Q: Who is AXIS Object Analytics for?
A: It is best suited for users who primarily want an alarm or event triggered based on their own configuration and the applied detection scenarios, and who would gain little or no additional value from buying a more expensive, use-case-critical analytics product.
Q: How do I activate the license for AXIS Object Analytics?
A: No license is needed, since AXIS Object Analytics is a value-added analytics application. It is free of charge and preinstalled in the firmware of compatible cameras. Once the firmware has been upgraded, the analytics is available to use. Cameras with a machine learning processing unit (MLPU) require firmware 10.2 or higher, and cameras with a deep learning processing unit (DLPU) require firmware 10.3 or higher.
Q: What is the difference between AXIS Object Analytics machine
learning and deep learning?
A: Its detection and classification capabilities vary depending on whether AXIS Object Analytics is running on a camera with a machine learning processing unit (MLPU) or a camera with a deep learning processing unit (DLPU).
AXIS Object Analytics running on a camera with a machine learning processing unit (MLPU):
- Classifies humans and vehicles
- Considerations: humans and vehicles should look like humans and vehicles
AXIS Object Analytics running on a camera with a deep learning processing unit (DLPU):
- Classifies humans, vehicles as well as different types of vehicles including cars, trucks, buses, and bikes (motorcycle/bicycle)
- Manages more challenging scenes (more crowded and busy scenes, objects in more challenging positions)
Q: Is it possible to run AXIS Object Analytics simultaneously with
other applications on a camera?
A: Running multiple ACAP applications simultaneously on a camera may affect performance. We therefore strongly recommend running only AXIS Object Analytics and disabling all other applications.
Q: What is the difference between AXIS Guard Suite and
AXIS Object Analytics?
A: AXIS Guard Suite encompasses three separate license-free analytics that provide motion, intrusion,
and loitering detection for your network video products: AXIS Fence Guard, AXIS Motion Guard and AXIS Loitering Guard. Currently, these applications are offered on all cameras that can’t support AXIS Object Analytics. These applications are all motion-based, so they react to most movement of pixels within the designated area. Regardless of what the object is, the motion of the pixels will trigger an alarm.
AXIS Object Analytics uses an object detection engine to classify the motion which means that the analytics can determine if the moving object is a human or vehicle and disregard motion that is caused by other irrelevant moving objects.
AXIS Object Analytics is offered preinstalled at no extra cost on compatible Axis network cameras with firmware 10.2 and higher. Although the same cameras also support AXIS Guard Suite, we recommend using AXIS Object Analytics. AXIS Guard Suite is offered free of charge on all cameras with firmware 7.1 and higher.
Q: How long does it take for the detection to initialize?
A: The object must initially be moving within the camera’s field of view (the scene) for the analytics to be able to detect and classify it. Depending on the scene, its conditions, and object visibility, the time it takes for detection to initialize can vary. Although detection could initialize sooner, our general recommendation is that an object needs to be fully visible and moving for at least 2 seconds.
Q: Does AXIS Object Analytics support older Axis products?
A: No, it relies on a specific hardware component that is built into certain cameras.
Q: Will I see the metadata overlay in the recorded and live screen of
the analytics when there is a trigger?
A: Yes, you will see the burnt-in metadata overlay, provided that the following steps have been taken:
- Burnt-in metadata overlay has been selected/enabled on the desired resolution.
- The correct streaming profile, matching the resolution above, has been selected.
- Recording has been configured to take place based on the trigger and with the correct resolution profile.
When the metadata overlay has been selected for a given resolution, a rectangle around triggering objects becomes visible. The rectangle is only visible in live views and recordings at that resolution. This feature applies red boxes to people and blue boxes to vehicles in the Object in area and Line crossing scenarios. Enabling Time in area (beta) applies yellow boxes, along with a timer, to the objects. When running AXIS Object Analytics on DLPU cameras, this feature also applies resemblance icons for each type of classified object.
Q: Can I change the colors of objects and the alarming area?
A: No. Today, humans are shown in red and vehicles in blue, and the alarming zone is outlined in red. Enabling Time in area (beta) displays dwelling objects with yellow boxes, along with a timer, until the condition has been fulfilled.
Q: Why do I see double bounding boxes within the analytics?
A: When the metadata overlay is enabled and you enter the configuration page of a scenario, you will notice the visual confirmation being displayed as well. To resolve this, disable the metadata overlay within the settings and then return to the configuration page. Note that this will also remove any metadata overlay being burnt into your live and recorded views.
Q: How do I configure different scenarios?
A: Please read the user manual on how to configure different scenarios.
Q: Why does AXIS Object Analytics trigger on small objects and can
I minimize it?
A: If there are triggers on small objects, we recommend that you configure the perspective calibration
within the application’s settings.
Q: Can AXIS Object Analytics run on PTZ cameras, and does it
support pre-sets?
A: Yes. If the PTZ camera starts to move, AXIS Object Analytics will suspend itself until the final position is reached and then wait 5 seconds before further detections take place. When using a guard tour with AXIS Object Analytics, we recommend extending the timer between each position to ensure enough time for the analytics to initialize and start detecting.
Q: Can AXIS Object Analytics run on multi-sensor cameras?
A: The number of channels AXIS Object Analytics can run on is camera dependent. For an up-to-date list of compatible cameras, please visit the product page.
Q: Does AXIS Object Analytics work on fisheye cameras, and can I
change the view?
A: Yes, AXIS Object Analytics works on supported fisheye cameras. It will however only work on the
overview image of the camera.
Q: Does AXIS Object Analytics count objects?
A: No, AXIS Object Analytics does not currently count objects.
Q: Do I need a VMS to run AXIS Object Analytics or does it work as
a stand-alone application?
A: It is recommended to use the application with a VMS in most use cases. It is compatible with AXIS
Camera Station, Genetec and Milestone VMS. In general, all VMS that can receive Axis events from cameras are compatible. Nevertheless, we do recommend running a test if the target VMS is not any of the three listed above. The application can be used without a VMS using the event engine function of the camera to take a snapshot, send an e-mail, etc.
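The stand-alone case above can be sketched as a minimal receiver for notifications the camera's event engine posts over HTTP. This is a hedged illustration only: the endpoint path and payload format below are assumptions for the example, not a documented Axis format, and the actual recipient URL is whatever you configure in the camera's event rule.

```python
# Minimal sketch of an HTTP endpoint that a camera's event engine could
# post notifications to when AXIS Object Analytics triggers (the payload
# format here is an assumption for illustration).
from http.server import BaseHTTPRequestHandler, HTTPServer

events = []  # collected notification bodies

class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8", errors="replace")
        events.append(body)      # store the notification text
        self.send_response(200)  # acknowledge receipt to the camera
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the console quiet

def serve(port=8080):
    # Blocks forever; run in a thread if you need to do other work.
    HTTPServer(("", port), EventHandler).serve_forever()
```

In practice you would point the camera's event rule at this host and port and react to each notification (log it, forward it, etc.) instead of just collecting it.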
Q: How well will AXIS Object Analytics work in low light and dark
scenes?
A: Each scene requires its own considerations, as each has its own challenges, such as environmental or climatic conditions. Good contrast between the object and its background, and an object that still looks like a human or a vehicle in low light, will allow the analytics to be more accurate.
Q: What is the minimum amount of light/lux level for AXIS Object
Analytics?
A: Our minimum recommended light level is 50 lux.
Q: What is the minimum object size for humans and vehicles
detection?
A: For human detection: for a standing person to be detected at the recommended maximum
detection distance, the pixel height of the object should be at least 8% of the total image height. For example, if the height of the video stream is 1080 pixels, the height of a person standing at the end of the detection zone should be at least 86 pixels for human detection.
For vehicle detection: the pixel height of the object should be at least 6% of the total image height. For example, if the height of the video stream is 1080 pixels, the height of a vehicle at the end of the detection zone should be at least 64 pixels for vehicle detection.
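The percentages above amount to a simple rule of thumb. The sketch below just encodes the FAQ's 8% and 6% figures; the helper name is made up for illustration, and truncating to whole pixels matches the 86- and 64-pixel worked examples at 1080p.

```python
# Encodes the FAQ's minimum-object-size guidance: a standing person should
# span at least 8% of the image height, a vehicle at least 6%.
# Truncating to whole pixels matches the 1080p examples (86 and 64 px).
MIN_HEIGHT_FRACTION = {"human": 0.08, "vehicle": 0.06}

def min_pixel_height(stream_height: int, object_type: str) -> int:
    """Smallest object height, in pixels, expected to be detectable."""
    return int(stream_height * MIN_HEIGHT_FRACTION[object_type])
```

For a 1080-pixel-high stream this gives 86 pixels for a person and 64 pixels for a vehicle, as in the examples above.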
As of firmware 10.11, a minimum object size visualization tool is built into the application to help determine whether live objects are large enough to be detected.
Maximum detection distance varies depending on camera. Numbers above are for guidance only. To see what applies to specific cameras and plan camera placement and coverage with maps, visit
AXIS Site Designer.
Q: Which browsers will be supported in the configuration of Axis
Object Analytics?
A: Chrome, Firefox, and Edge (Chromium-based).
Q: Can I train AXIS Object Analytics, or does it learn on its own?
A: No, AXIS Object Analytics is trained and maintained by Axis. This is done through firmware
upgrades to cameras that are compatible with the analytics.
Q: What cameras are compatible with AXIS Object Analytics?
A: For a complete list of compatible cameras, please visit the product page.