H.265 has been the "next big thing" in CODECs for several years, claiming 50% savings over H.264, but camera and VMS support for it remain relatively rare. Additionally, in our tests, H.265 has had limited benefit over H.264 in similar scenes, about 10-15% on average, with H.264 Smart CODEC cameras (see section below) generally providing bigger bandwidth savings than H.265.

 

For example, in our Smart H.265 Samsung Test, H.265 produced ~15-20% lower bitrates than H.264 (with smart CODECs off on both), shown in the chart below. However, when smart CODECs were enabled with H.264, bitrates dropped by at least ~40% (daytime still scene) and by as much as 90%+.

 

 

 

Note that H.265 is still developing, and will likely become more efficient over time, as H.264 has.

 

Readers should see our H.265 / HEVC Codec Tutorial for more details on these issues, including bitrates, camera support, and VMS integration.

I-Frames vs. P-Frames

 

In inter-frame CODECs, frames which encode the complete image are called I-frames, while those encoding only changes from previous frames are P-frames. Because each one contains a full image, the more I-frames in a stream, the higher the bandwidth.

 

For years, cameras were typically only able to use a fixed I-frame interval, measured either in seconds or frames. Sending too few I-frames could negatively impact imaging, with long "trails" of encoding artifacts, while too many I-frames provide little to no visible benefit, as seen in this video from our Test: H.264 I vs P Frame Impact.

 


 

 

However, with the introduction of Smart CODECs in the past 1-2 years, cameras are now able to dynamically adjust the I-frame interval instead of using a fixed value. So where a typical 10 FPS camera might be set to send an I-frame every second, a smart CODEC enabled model extends this interval when there is no motion in the scene, as shown in this example:

 

 

 

Smart CODECs are a complex topic, covered in more detail below and in our Smart CODEC Guide.
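
To make this dynamic interval behavior concrete, below is a minimal sketch of such control logic, assuming a 10 FPS stream, a simple per-frame motion flag, and illustrative interval limits. The names and thresholds are assumptions for illustration, not any manufacturer's actual implementation:

FPS = 10
NORMAL_INTERVAL = 1 * FPS   # default: one I-frame per second
MAX_INTERVAL = 60 * FPS     # cap the stretched interval at roughly one minute

def choose_frame_types(motion_flags):
    """Return an 'I'/'P' decision for each frame, given per-frame motion flags."""
    decisions = []
    frames_since_i = NORMAL_INTERVAL  # force an I-frame at the start of the stream
    current_interval = NORMAL_INTERVAL
    prev_motion = False

    for motion in motion_flags:
        motion_started = motion and not prev_motion
        if motion_started or frames_since_i >= current_interval:
            decisions.append("I")
            frames_since_i = 0
            # Stretch the interval only while the scene stays still; during
            # motion, fall back to the normal one-second interval.
            current_interval = (NORMAL_INTERVAL if motion
                                else min(current_interval * 2, MAX_INTERVAL))
        else:
            decisions.append("P")
            frames_since_i += 1
        prev_motion = motion
    return decisions

# Example: 3 seconds of motion followed by 60 seconds of a still scene.
flags = [True] * (3 * FPS) + [False] * (60 * FPS)
types = choose_frame_types(flags)
print("I-frames sent:", types.count("I"), "of", len(types), "frames")

With a fixed one-second interval, the 63 seconds above would contain about 63 I-frames; the dynamic logic sends only a handful once the scene goes still, which is where the bandwidth savings come from.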

Fixed I-frame Interval Effects

 

Though many cameras are now smart CODEC enabled and do not use fixed I-frame intervals, many others (especially older models) do not support smart CODECs, and users may simply choose not to enable them, so it is important to understand the impact of I-frame interval on bandwidth.

 

Reducing the number of I-frames (moving from a 1 to a 2 or 4 second interval) produces minimal bandwidth reductions, as seen below, despite the severe negative impact on image quality.

 

 

 

Conversely, increasing the number of I-frames to more than one per second significantly increases bandwidth, despite the minimal improvement in image quality.
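
A rough, back-of-the-envelope model explains both results: average bitrate is approximately (I-frame size + (N - 1) x P-frame size) x 8 / GOP duration, where N is the number of frames per I-frame interval. Once the interval reaches a second or more, the stream is dominated by P-frames, so lengthening it further saves little, while shortening it below a second adds large I-frames without removing anything. The sketch below uses illustrative, assumed frame sizes for a scene with steady motion, not measurements from any specific camera:

FPS = 10
I_FRAME_KBYTES = 30.0  # assumed size of one I-frame (illustrative only)
P_FRAME_KBYTES = 10.0  # assumed size of one P-frame in a scene with motion

def bitrate_kbps(iframe_interval_s):
    """Approximate average bitrate (kilobits/s) for a fixed I-frame interval."""
    frames_per_gop = max(1, round(iframe_interval_s * FPS))
    gop_kbytes = I_FRAME_KBYTES + (frames_per_gop - 1) * P_FRAME_KBYTES
    return gop_kbytes * 8 / iframe_interval_s

for interval in (0.1, 0.5, 1, 2, 4):
    print(f"{interval:>4} s interval: ~{bitrate_kbps(interval):5.0f} kb/s")

In this model, lengthening the interval from 1 to 4 seconds trims bitrate by only about 12%, while sending an I-frame every frame (a 0.1 second interval) increases it roughly 2.5x, mirroring the test results above.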

 

 

 

For full details on I and P frame impact on bandwidth and image quality see our H.264 I vs P Frame Test.

Smart CODECs

 

One recent development with a huge impact on bandwidth is the introduction of smart CODECs. These technologies typically reduce bitrate in two ways:

 

    • Dynamic compression: First, instead of using a single compression level for the whole scene, the camera may apply little compression to moving objects, with higher compression/lower quality on static background areas, since detailed images of still areas of the scene are rarely needed (see the sketch after this list).

    • Dynamic I-frame interval: Second, instead of using a steady I-frame interval, cameras may increase the distance between I-frames when the scene is still, with some extending to very long intervals in our tests, over a minute in some cases. Then, when motion begins, the camera immediately generates an I-frame and reduces interval to previous levels.
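
To illustrate the dynamic compression idea referenced above, here is a minimal sketch: compare each frame to the previous one and assign a low quantization parameter (little compression, more detail) to blocks that changed, and a high one to static blocks. The block size, threshold, and QP values are illustrative assumptions, not any manufacturer's actual tuning:

import numpy as np

BLOCK = 16            # macroblock size in pixels
QP_MOVING = 22        # lower QP = less compression, more detail
QP_STATIC = 38        # higher QP = more compression, less detail
DIFF_THRESHOLD = 8.0  # mean absolute pixel difference that counts as motion

def qp_map(prev_frame, curr_frame):
    """Return a per-block QP map from two grayscale frames (H x W, uint8)."""
    h, w = curr_frame.shape
    rows, cols = h // BLOCK, w // BLOCK
    qps = np.full((rows, cols), QP_STATIC, dtype=np.int32)

    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    for r in range(rows):
        for c in range(cols):
            block = diff[r * BLOCK:(r + 1) * BLOCK, c * BLOCK:(c + 1) * BLOCK]
            if block.mean() > DIFF_THRESHOLD:
                qps[r, c] = QP_MOVING  # keep detail where something changed
    return qps

# Example: a static background with an object appearing in one block.
prev = np.zeros((128, 128), dtype=np.uint8)
curr = prev.copy()
curr[32:48, 64:80] = 200
print(qp_map(prev, curr))  # that block gets QP 22, the rest stay at 38

Real implementations are considerably more sophisticated, accounting for noise, texture, and scene content, but the principle of spending bits where the scene changes is the same.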

 

Some smart CODECs may use other methods as well, such as dynamic framerates (used by Axis/Avigilon), increased/improved digital noise reduction (Panasonic Smart Coding), and others.

 

Exact methods used by each smart CODEC and their effectiveness vary. However, in general, bitrates in still scenes were reduced by 50-75% in our tests, with over 95% possible.

 

As an example, in our test of Zipstream 2, bitrates dropped by ~99% in still scenes using dynamic compression, I-frame interval, and FPS:

 

 

 

For more details, see our Smart CODEC Guide.

 

Camera Field of View

 

Field of view's impact on bandwidth varies depending on which width reveals more complex detail in the scene. In scenes with large areas of moving objects, such as trees or other blowing vegetation, widening the field of view will likely increase bandwidth. In scenes with relatively little movement but repetitive backgrounds, such as parking lots, roofing, patterned carpet or walls, etc., narrowing the field of view will increase bandwidth, due to more of these fine details being discernible.

 

For example, in the park shown below, increasing the field of view results in a ~60% increase in bandwidth due to more moving foliage and shadows in the scene compared to the narrower field of view.

 

 

However, in a busy intersection/parking lot, bandwidth decreases by over 50% in the cameras below when widening the field of view. In the narrower FOV, more details of buildings are visible, and the repetitive pattern of the asphalt parking lot may be seen as well, making the scene more difficult to encode.

 

 

 

For further details of field of view's impact on bandwidth, see our Advanced Camera Bandwidth Test.

 

Low Light

 

Compared to daytime, low light bitrates were an average of nearly 500% higher (seen below). This is mainly due to increased digital noise resulting from high levels of gain.

 

 

However, two key improvements are increasingly used to reduce this:

 

    • Digital noise reduction techniques have improved in recent years, greatly reducing these spikes on many cameras.

    • Increased use of integrated IR cameras results in smaller spikes at night. Compared to the nearly 500% increase in day/night models, integrated IR cameras' bitrates increased by an average of 176%, due to the scene being illuminated by IR (seen below).

 

 

 

For full details of low light's impact on bandwidth, see our Bandwidth vs Low Light test report.

 

Wide Dynamic Performance

 

WDR's impact on bitrate varies depending on the camera and the scene. Again taking examples from our Advanced Camera Bandwidth Test, when switching WDR on in an Axis WDR camera in an outdoor intersection scene, bandwidth increases, as more details become visible (beneath the eaves of buildings, in the treeline, etc.).

 

 

 

However, looking at an outdoor track and sports field, bandwidth decreases. In this case, the Q1604 increases contrast slightly on some areas of the image, such as the trees and bleachers in the center/left of the FOV. Because of this, these areas are more similarly colored and easier to compress, lowering bitrate.

 

 

 

Note that for other cameras, these results may vary, depending on how well they handle light and dark areas, how they handle contrast when WDR is turned on, and more.

 

Sharpness

 

Sharpness has a huge impact on bandwidth consumption, yet it is rarely considered during configuration, even by experienced technicians. Oversharpening reveals more fine (though rarely practically useful) details of the scene, such as carpet and fabric patterns, edges of leaves and blades of grass, etc. Because more detail is shown, bandwidth increases.

 

For example, in the FOV below (from our Advanced Camera Bandwidth Test), bitrate increases by nearly 600% from minimum to maximum sharpness in the Dahua camera, and almost 300% in the Axis Q1604.

 

 

 

 

Color vs. Monochrome

 

At practical levels (without desaturation or oversaturation effects), color has minimal impact on bandwidth. In the examples below, moving from default color settings to monochrome decreases bandwidth by 20 Kb/s, about an 8% decrease.

 

However, oversaturation may result in abnormally high bandwidth. In this example, bandwidth increases by over 200% when changing color settings from default to their highest level, which also creates oversaturation effects such as color bleeding (seen in the red chair).

 

 

One practical example of a manufacturer desaturating its video to 'save' bandwidth is Arecont Bandwidth Savings Mode (which we tested here).

 

Manufacturer Model Differences

 

Across specific models in a given manufacturer's line, significant differences in bitrate may occur, despite the cameras using the same resolution and framerate. This may be due to different image sensors or processors being used, different default settings in each model, better or worse low light performance, or any number of other factors.

 

For example, the following image shows two cameras, an Axis Q1604 and Axis M3004, both 720p, 10 fps, set to a ~20' horizontal FOV, at compression of ~Q28. Despite these factors being standardized, in this well lit indoor scene, the Q1604's bitrate was 488 Kb/s while the M3004 consumed 1.32 Mb/s, nearly 3x the bandwidth.

 

 

 

Beware: model differences have become more extreme in some cases, as some cameras support Smart CODECs while others in the same line may not.

 

Measure Your Own Cameras

 

As this guide shows, there are few easy, safe rules for estimating bandwidth (and therefore storage) in the abstract. Too many factors impact it, and some of them are driven by factors within the camera that are impossible to know in advance.

 

Though it is important to understand which factors impact bandwidth, combine this knowledge with your own measurements of the cameras you plan to deploy. This will ensure the most accurate estimates and planning for deployments.
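
One simple way to measure is to record the camera's stream for a fixed period without re-encoding and divide the recorded size by the duration. The sketch below does this with ffmpeg; the RTSP URL is a placeholder, and the transport and credentials should be adjusted for your camera:

import os
import subprocess

RTSP_URL = "rtsp://user:password@192.168.1.100/stream1"  # placeholder address
DURATION_S = 300          # measure over several minutes, ideally longer
OUTPUT = "bitrate_sample.mkv"

subprocess.run(
    [
        "ffmpeg",
        "-rtsp_transport", "tcp",  # TCP avoids dropped packets skewing results
        "-i", RTSP_URL,
        "-t", str(DURATION_S),     # stop after the measurement window
        "-c", "copy",              # copy the stream as-is, no re-encoding
        OUTPUT,
    ],
    check=True,
)

size_bytes = os.path.getsize(OUTPUT)
avg_kbps = size_bytes * 8 / 1000 / DURATION_S
print(f"Average bitrate over {DURATION_S}s: {avg_kbps:.0f} kb/s")

Container overhead inflates the figure slightly, and measurements should be repeated under representative conditions (day, night, motion, still scenes), since, as shown above, bitrates can vary by several hundred percent between them.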

       
