This document gives an in-depth introduction to VirtualLab Fusion. Below you find the table of contents of our User's Manual. Clicking a headline or red text takes you directly to the corresponding content in the User's Manual.
A VirtualLab Fusion window consists of the following areas: the ribbon with the Quick Access Toolbar, Docking Tabs on the right, and a status bar. The main area of the user interface is used for document windows. A document window is also referred to as a view. VirtualLab Fusion supports several types of documents, for example optical systems and harmonic fields.
In VirtualLab Fusion there are several document types which store one- or two-dimensional data. The Manipulations ribbon tab contains several operations to modify this (mostly) sampled data, which are described in the following sections. Furthermore, it contains Conversions to convert one document type into another.
Catalogs are accessible via the catalog dialog. It serves for adding, viewing, editing, or removing catalog entries, or for selecting an entry. In the case of selection, the dialog is opened via the corresponding controls.
Some types of Optical Building Blocks contain other Building Blocks (e.g. a medium contains materials). In VirtualLab Fusion each Building Block category has its own catalog. The general catalog concept is explained in Part VI.
In VirtualLab Fusion optical systems are defined using the Optical Setup document. It can contain different Optical Setup Elements (e.g. light sources, components, detectors) which are linked to define an execution sequence.
VirtualLab Fusion provides a growing set of source models. Each model describes an electromagnetic field in a plane. The source generation results in a discretized field that is represented by a data array (sampled field) and additional parameters including, e.g., wavelength(s), physical coordinates and others.
Analyzers evaluate an Optical Setup or a single Optical Setup Element in a special way, independent of the simulation engines described in Sec. 45.3.1. For example, the Distortion Analyzer calculates the distortion introduced by one Optical Setup Element, whereas the Eigenmode Analyzer calculates the eigenmode of a complete laser resonator optical setup.
VirtualLab Fusion offers a large variety of free-space propagation operators, which can only be used for homogeneous, isotropic media. Real components can contain interfaces and both homogeneous and inhomogeneous media. Thus they require special propagation operators.
This part of the manual explains the most important file formats which can be imported into / exported from VirtualLab Fusion. Some dialogs allow an import / export to their own specific formats which are explained in the corresponding sections (e.g. snippets).
The benefits of radar-video fusion are more accurate detections and classifications, and less false and missed alarms. The fusion of the two technologies comes together in AXIS Object Analytics, which is the main interface used to access and configure the radar-video fusion.
AXIS Q1656-DLE detects and classifies objects in wide areas with depth, and you can use it for area monitoring or road monitoring. Additionally, AXIS Q1656-DLE works well in a site design combined with other devices. Since the detection range of the radar is larger than the field of view of the camera in AXIS Q1656-DLE, combine it with PTZ cameras with IR illumination to achieve visual confirmation in the entire detection range of the radar. Or combine it with thermal cameras, which can detect and classify objects in long and narrow areas.
The video typically provides more accurate classifications when there is sufficient contrast and when the object is moving close to the camera. It will also provide more granular classifications than the radar. However, a camera needs good lighting conditions to see.
The radar on the other hand can detect objects even in challenging lighting conditions, and its detection and classification range is longer. Regardless of the weather conditions, the radar can measure the speed of a moving object, as well as its direction and the distance to it. However, the lack of visual confirmation can make the radar classifications more fragile. Swaying objects and reflective surfaces can trigger false alarms and must be taken into consideration when designing the site and configuring the radar.
The two technologies in the radar-video fusion camera can of course be used on their own but are more powerful when the analytics from both technologies interact to provide more reliable detections and classifications.
For example, if an object appears at a distance of 50 m (164 ft), it may be too small for the video analytics to detect, but the radar can identify it. In that case, the radar detection is fused into the image plane and can be used to trigger alarms inside AXIS Object Analytics.
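Mapping a radar detection into the image plane can be illustrated with a simplified pinhole-camera sketch. The device's actual calibration and fusion pipeline is internal to the firmware; the function name and every parameter below (focal length, principal point, mounting height) are hypothetical placeholders:

```python
import math

def radar_to_pixel(distance_m, azimuth_deg,
                   focal_px=1000.0, cx=960.0, cy=540.0,
                   mount_height_m=3.5):
    """Project a radar detection (range + azimuth on the ground plane)
    onto a pinhole image sensor. Illustrative only; all parameters are
    assumptions, not device calibration data."""
    az = math.radians(azimuth_deg)
    # Ground-plane position in camera coordinates (z forward, x right).
    x = distance_m * math.sin(az)
    z = distance_m * math.cos(az)
    y = mount_height_m          # the ground point lies below the camera
    # Pinhole projection into pixel coordinates.
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    return u, v

# An object 50 m straight ahead lands on the image centre column,
# below the principal point (because it sits on the ground).
u, v = radar_to_pixel(50.0, 0.0)
print(round(u), round(v))  # → 960 610
```

Once a radar detection has image coordinates like these, it can be overlaid on the video stream and used to trigger alarms even before the video analytics can see the object.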
Analytics fusion: The radar detections and classifications are fused with the detections and classifications from the video analytics. This gives the device a combined analytics output where the respective strengths of both technologies are merged. It uses the distance and speed from the radar, and the position and class from the video.
When the object in the example above comes closer, the video analytics also detects it. The radar detection is then fused with the video analytics output to produce an output of higher quality, and with more information, than what the technologies can provide separately.
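The merging of the two analytics outputs described above can be sketched as follows. This is a toy illustration of the idea (distance and speed taken from the radar, position and class from the video), not the device's actual analytics; all class names and fields are assumptions:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class RadarDetection:
    distance_m: float      # range to the object
    speed_kmh: float       # measured speed
    azimuth_deg: float     # direction relative to the device

@dataclass
class VideoDetection:
    bbox: Tuple[int, int, int, int]  # (x, y, w, h) in pixels
    label: str                       # e.g. "human", "vehicle"

@dataclass
class FusedDetection:
    distance_m: Optional[float]
    speed_kmh: Optional[float]
    bbox: Optional[Tuple[int, int, int, int]]
    label: str

def fuse(radar: Optional[RadarDetection],
         video: Optional[VideoDetection]) -> FusedDetection:
    """Merge one associated radar/video pair into a single output,
    keeping the respective strength of each sensor."""
    if radar and video:
        return FusedDetection(radar.distance_m, radar.speed_kmh,
                              video.bbox, video.label)
    if radar:  # object still too small/far for the video analytics
        return FusedDetection(radar.distance_m, radar.speed_kmh,
                              None, "unknown")
    return FusedDetection(None, None, video.bbox, video.label)

# Far away: radar-only detection, class not yet confirmed visually.
far = fuse(RadarDetection(50.0, 12.0, -5.0), None)
# Closer: both sensors contribute, so the output is richer.
near = fuse(RadarDetection(20.0, 5.0, 0.0),
            VideoDetection((400, 300, 80, 160), "human"))
print(far.label, near.label, near.distance_m)  # → unknown human 20.0
```

The radar-only case still carries enough information (position, speed, distance) to trigger an alarm, while the fused case adds a visually confirmed class and a pixel-accurate bounding box.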
Preview mode is ideal for installers when fine-tuning the camera view during installation. No login is required to access the camera view in preview mode. It is available only in the factory-defaulted state, for a limited time after powering up the device.
This product is intended for monitoring open areas and you can use it either for area monitoring or road monitoring. For installation examples and use cases, see Area installation and Road installation.
Solid and metal objects can affect the performance of the radar in AXIS Q1656-DLE. Most solid objects (such as walls, fences, trees, or large bushes) in the coverage area will create a blind spot (radar shadow) behind them. Metal objects in the field of view cause reflections that affect the ability of the radar to perform classifications. This can lead to ghost tracks and false alarms in the radar stream.
Install the product on a stable pole or on a spot on a wall where there are no other objects or installations. Objects that reflect radio waves within 1 m (3 ft) to the left and right of the product affect the performance of the radar in AXIS Q1656-DLE.
If you mount more than eight radars or radar-video fusion cameras operating on the 60 GHz frequency band close together, they may interfere with each other. To avoid interference, see Install multiple Axis radar devices.
AXIS Q1656-DLE operates on the 60 GHz frequency band. You can install up to eight Axis radars or radar-video fusion cameras operating on the 60 GHz frequency band close to each other, or facing each other, without causing problems. The built-in coexistence algorithm can find a suitable time slot and frequency channel that will minimize interference.
If an installation contains more than eight radar devices operating on the same frequency band, and many of the devices are pointing away from each other, there is less risk of interference. In general, radar interference will not cause the radar to stop functioning. A built-in interference mitigation algorithm tries to repair the radar signal even when interference is present. A warning about interference is expected in an environment with many radars operating on the same frequency band in the same coexistence zone. The main impacts of interference are deterioration of the detection performance and occasional ghost tracks.
You can combine the radar-video fusion camera with Axis radars operating on another frequency band without having to think about coexistence. Axis radar devices that are operating on different frequency bands will not interfere with each other.
The radar in AXIS Q1656-DLE has a horizontal field of detection of 95°. The detection range of the radar depends on factors like the scene, the mounting height and tilt angle of the product, and the size and speed of the moving objects.
The detection range also depends on the monitoring profile you select. You can use AXIS Q1656-DLE for area or road monitoring and there are two profiles in the radar that are optimized for each one of the scenarios:
Area monitoring profile: the radar tracks and classifies humans, vehicles and unknown objects moving at speeds lower than 55 km/h (34 mph). For information about detection range, see Area detection range.
The mounting height of the radar-video fusion camera and the vehicle speed will impact the detection range of the radar. When mounted at an optimal installation height, the radar detects approaching and departing vehicles with a speed accuracy of +/- 2 km/h (1.24 mph) within the following ranges:
When the lens in AXIS Q1656-DLE is zoomed out maximally, objects can get too small to detect for the video analytics. In this scenario, it's likely that objects will be detected by the radar with its wide coverage, but not by the video analytics. If you want to establish visual confirmation in the entire detection range of the radar, you can pair AXIS Q1656-DLE with one or more PTZ cameras.
If two people are walking close together and are detected by the radar, but not the video analytics, they will be classified as one person and only one bounding box will surround them. When they enter the analytics fusion zone and visual confirmation is achieved, they will be accurately classified. The spatial differentiation of the radar in AXIS Q1656-DLE is 3 m (9 ft).
For 180° radar coverage, place two AXIS Q1656-DLE next to each other. When you install more than one pair of radar-video fusion cameras side-by-side, we recommend placing them with 100 m (330 ft) spacing between each pair, as shown in the example.