HD IP CCTV Technical Review and Results
Welcome!
This book represents the findings of 10,000+ hours of testing on hundreds of different surveillance cameras. We have taken these lessons, summarized them, and included dozens of images to convey the relevant topics and trade-offs.
IPVM is the world's only independent video surveillance testing and research organization. Supported by small payments from more than 10,000 members in over 100 countries, we accept no advertising or sponsorships.
We hope this book helps educate you and makes you better at selecting and using video surveillance.
The technical review results can be found in the subsections below.
NetVISION Recording Capacity (Days)

| Disk Capacity | Recording Resolution | H.264 Bit Rate | H.265 Bit Rate | 4 Ch, H.264 (days) | 4 Ch, H.265 (days) | 8 Ch, H.264 (days) | 8 Ch, H.265 (days) | 16 Ch, H.264 (days) | 16 Ch, H.265 (days) |
| 1 TB | 4K | 8192 Kbps | 4096 Kbps | 3.0 | 5.9 | 1.5 | 3.0 | 0.7 | 1.5 |
| 1 TB | 4MP | 6020 Kbps | 3010 Kbps | 4.0 | 8.1 | 2.0 | 4.0 | 1.0 | 2.0 |
| 1 TB | 1080P | 4096 Kbps | 2048 Kbps | 5.9 | 11.9 | 3.0 | 5.9 | 1.5 | 3.0 |
| 1 TB | 720P | 2048 Kbps | 1024 Kbps | 11.9 | / | 5.9 | / | 3.0 | / |
| 2 TB | 4K | 8192 Kbps | 4096 Kbps | 5.9 | 11.9 | 3.0 | 5.9 | 1.5 | 3.0 |
| 2 TB | 4MP | 6020 Kbps | 3010 Kbps | 8.1 | 16.1 | 4.0 | 8.1 | 2.0 | 4.0 |
| 2 TB | 1080P | 4096 Kbps | 2048 Kbps | 11.9 | 23.7 | 5.9 | 11.9 | 3.0 | 5.9 |
| 2 TB | 720P | 2048 Kbps | 1024 Kbps | 23.7 | / | 11.9 | / | 5.9 | / |
| 3 TB | 4K | 8192 Kbps | 4096 Kbps | 8.9 | 17.8 | 4.4 | 8.9 | 2.2 | 4.4 |
| 3 TB | 4MP | 6020 Kbps | 3010 Kbps | 12.1 | 24.2 | 6.0 | 12.1 | 3.0 | 6.0 |
| 3 TB | 1080P | 4096 Kbps | 2048 Kbps | 17.8 | 35.6 | 8.9 | 17.8 | 4.4 | 8.9 |
| 3 TB | 720P | 2048 Kbps | 1024 Kbps | 35.6 | / | 17.8 | / | 8.9 | / |
| 4 TB | 4K | 8192 Kbps | 4096 Kbps | 11.9 | 23.7 | 5.9 | 11.9 | 3.0 | 5.9 |
| 4 TB | 4MP | 6020 Kbps | 3010 Kbps | 16.1 | 32.3 | 8.1 | 16.1 | 4.0 | 8.1 |
| 4 TB | 1080P | 4096 Kbps | 2048 Kbps | 23.7 | 47.4 | 11.9 | 23.7 | 5.9 | 11.9 |
| 4 TB | 720P | 2048 Kbps | 1024 Kbps | 47.4 | / | 23.7 | / | 11.9 | / |
| 6 TB | 4K | 8192 Kbps | 4096 Kbps | 17.8 | 35.6 | 8.9 | 17.8 | 4.4 | 8.9 |
| 6 TB | 4MP | 6020 Kbps | 3010 Kbps | 24.2 | 48.4 | 12.1 | 24.2 | 6.0 | 12.1 |
| 6 TB | 1080P | 4096 Kbps | 2048 Kbps | 35.6 | 71.7 | 17.8 | 35.6 | 8.9 | 17.8 |
| 6 TB | 720P | 2048 Kbps | 1024 Kbps | 71.7 | / | 35.6 | / | 17.8 | / |
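As a cross-check, the day figures in the table can be reproduced approximately from the bit rates with simple arithmetic. The sketch below is a generic estimate (assuming continuous recording and decimal terabytes), not a NetVISION calculator:

```python
def recording_days(disk_tb: float, bitrate_kbps: float, channels: int) -> float:
    """Estimate continuous recording duration in days for a given disk size,
    per-channel bit rate, and channel count."""
    disk_bits = disk_tb * 1e12 * 8                          # 1 TB = 10^12 bytes
    bits_per_day = bitrate_kbps * 1000 * channels * 86400   # bits written per day
    return disk_bits / bits_per_day

# 1 TB, 4K @ 8192 Kbps (H.264), 4 channels -> ~2.8 days, close to the 3.0 days
# listed above (vendors round and reserve overhead differently).
print(round(recording_days(1, 8192, 4), 1))
```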
H.265 has been the "next big thing" in CODECs for several years, claiming 50% savings over H.264, but camera and VMS support for it remain relatively rare. Additionally, in our tests, H.265 has had limited benefit over H.264 in similar scenes, about 10-15% on average, with H.264 Smart CODEC cameras (see section below) generally providing bigger bandwidth savings than H.265.
For example, in our Smart H.265 Samsung Test, H.265 produced ~15-20% lower bitrates than H.264 (with smart CODECs off on both), shown in the chart below. However, using smart CODECs with H.264, bitrates dropped by at least ~40% (daytime still scene) and as much as 90%+.
Note that H.265 is still developing, and will likely become more efficient over time, as H.264 has.
Readers should see our H.265 / HEVC Codec Tutorial for more details on these issues, including bitrates, camera support, and VMS integration.
I-Frames vs. P-Frames
In inter-frame CODECs, frames which capture the full field of view are called I-frames, while those sending only changes are P-frames. Because they capture a full image, the more I-frames in a stream, the higher the bandwidth.
For years, cameras were typically only able to use a fixed I-frame interval, measured either in seconds or frames. Sending too few I-frames could negatively impact imaging, with long "trails" of encoding artifacts, while too many I-frames provide little to no visible benefit, as seen in this video from our Test: H.264 I vs P Frame Impact.
Note: Click here to watch the I-Frame Intervals video on IPVM
However, with the introduction of Smart CODECs in the past 1-2 years, cameras are now able to dynamically adjust I-frame interval, instead of using a fixed value. So where a typical 10 FPS camera might be set to send an I-frame every second, a smart CODEC enabled model would extend this when there is no motion in the scene, shown in this example:
Smart CODECs are a complex topic, covered in more detail below and in our Smart CODEC Guide.
Fixed I-frame Interval Effects
Though many cameras are smart CODEC enabled and do not use fixed I-frame intervals, many others (especially older models) are not, and users may simply choose not to enable smart CODECs, so it is important to understand the impact of I-frame interval on bandwidth.
Reducing the number of I-frames (moving from 1 to 2 to 4 second interval) produces minimal bandwidth reductions, as seen below, despite the severe negative image quality impact.
Conversely, increasing the number of I-frames to more than one per second significantly increased bandwidth, despite the minimal increase in image quality.
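One way to see why this asymmetry occurs is to model a stream as occasional large I-frames plus much smaller P-frames. The frame sizes below are illustrative assumptions for a 10 FPS stream, not measured values:

```python
# Toy model of bandwidth vs. I-frame interval (assumed sizes, not measurements).
I_FRAME_KB = 30.0   # assumed size of a full I-frame
P_FRAME_KB = 8.0    # assumed size of a P-frame (changes only)
FPS = 10

def bitrate_kbps(i_frame_interval_s: float) -> float:
    """Average bitrate for a given I-frame interval, in kilobits per second."""
    frames_per_gop = FPS * i_frame_interval_s
    gop_kb = I_FRAME_KB + (frames_per_gop - 1) * P_FRAME_KB
    return gop_kb * 8 / i_frame_interval_s   # KB -> kilobits, spread over the GOP

for interval in (0.1, 0.5, 1, 2, 4):
    print(f"I-frame every {interval}s: ~{bitrate_kbps(interval):.0f} Kb/s")
```

In this toy model, stretching the interval from 1 second to 4 seconds only trims the bitrate modestly, while sending an I-frame for every frame roughly triples it, mirroring the asymmetry seen in testing.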
For full details on I and P frame impact on bandwidth and image quality see our H.264 I vs P Frame Test.
Smart CODECs
One recent development with huge impact on bandwidth is the introduction of smart CODECs. These technologies typically reduce bitrate in two ways:
- Dynamic compression: First, instead of using a single compression level for the whole scene, the camera may apply little compression to moving objects, with higher compression/lower quality on static background areas, since we most often do not need detailed images of still areas of the scene.
- Dynamic I-frame interval: Second, instead of using a steady I-frame interval, cameras may increase the distance between I-frames when the scene is still, with some extending to very long intervals in our tests, over a minute in some cases. Then, when motion begins, the camera immediately generates an I-frame and reduces interval to previous levels.
Some smart CODECs may use other methods as well, such as dynamic framerates (used by Axis/Avigilon), increased/improved digital noise reduction (Panasonic Smart Coding), and others.
Exact methods used by each smart CODEC and their effectiveness vary. However, in general, bitrates in still scenes were reduced by 50-75% in our tests, with over 95% possible.
As an example, in our test of Zipstream 2, bitrates dropped by ~99% in still scenes using dynamic compression, I-frame interval, and FPS:
For more details, see our Smart CODEC Guide.
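As a rough illustration of the two mechanisms above, here is a minimal sketch of how a smart CODEC controller could behave; the thresholds, QP values, and GOP lengths are hypothetical, not any manufacturer's actual implementation:

```python
def choose_quantization(block_has_motion: bool) -> int:
    """Dynamic compression: light compression on moving regions, heavier
    compression on the static background (higher QP = more compression)."""
    return 28 if block_has_motion else 38

def choose_frame_type(motion_now: bool, motion_prev: bool, frames_since_i: int,
                      base_gop: int = 10, max_gop: int = 600) -> str:
    """Dynamic I-frame interval: stretch the GOP while the scene is still;
    when motion begins, generate an I-frame immediately and return to the
    normal interval."""
    if motion_now and not motion_prev:
        return "I"                                    # motion just started: refresh now
    gop_limit = base_gop if motion_now else max_gop   # stretch the GOP when still
    return "I" if frames_since_i >= gop_limit else "P"
```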
Camera Field of View
Field of view's impact on bandwidth varies depending on which width reveals more complex details of the scene. In scenes with large areas of moving objects, such as trees or other blowing vegetation, widening the field of view will likely increase bandwidth. In scenes with relatively low movement but repetitive backgrounds, such as parking lots, roofing, patterned carpet or walls, etc., narrowing the field of view will increase bandwidth due to more of these fine details being discernible.
For example, in the park shown below, increasing the field of view results in a ~60% increase in bandwidth due to more moving foliage and shadows in the scene compared to the narrower field of view.
However, in a busy intersection/parking lot, bandwidth decreases by over 50% in the cameras below when widening the field of view. In the narrower FOV, more details of buildings are visible, and the repetitive pattern of the asphalt parking lot may be seen as well, making the scene more difficult to encode.
For further details of field of view's impact on bandwidth, see our Advanced Camera Bandwidth Test.
Low Light
Compared to daytime, low light bitrates were an average of nearly 500% higher (seen below). This is mainly due to increased digital noise resulting from high gain levels.
However, two key improvements are increasingly used to reduce this:
- Digital noise reduction techniques have improved in recent years, greatly reducing these spikes on many cameras.
- Increased use of integrated IR cameras results in smaller spikes at night. Compared to the nearly 500% increase in day/night models, bitrates on integrated IR cameras increased by an average of 176% due to IR illumination (seen below).
For full details of low light's impact on bandwidth, see our Bandwidth vs Low Light test report.
Wide Dynamic Performance
WDR's impact on bitrate varies depending on the camera and the scene. Again taking examples from our Advanced Camera Bandwidth Test, when switching WDR on in an Axis WDR camera in an outdoor intersection scene, bandwidth increases, as more details are visible (beneath the eaves of buildings, in the treeline, etc.).
However, looking at an outdoor track and sports field, bandwidth decreases. In this case, the Q1604 increases contrast slightly on some areas
of the image, such as the trees and bleachers in the center/left of the FOV. Because of this, these areas are more similarly colored and easier to compress, lowering bitrate.
Note that for other cameras, these results may vary, depending on how well they handle light and dark areas, how they handle contrast when WDR is turned on, and more.
Sharpness
Sharpness has a huge impact on bandwidth consumption, yet it is rarely considered during configuration, even by experienced
technicians. Oversharpening reveals more fine (though rarely practically useful) details of the scene, such as carpet and fabric patterns, edges of leaves and blades of grass, etc. Because more detail is shown, bandwidth increases.
For example, in the FOV below (from our Advanced Camera Bandwidth Test), bitrate increases by nearly 600% from minimum to maximum sharpness in the Dahua camera, and almost 300% in the Axis Q1604.
Color vs. Monochrome
At practical levels (without desaturation or oversaturation effects), color has minimal impact on bandwidth. In the examples below, moving from default color settings to monochrome reduces bandwidth by 20 Kb/s, about 8%.
However, oversaturation may result in abnormally high bandwidth. In this example, bandwidth increases by over 200% when changing color settings from default to their highest level, which also creates oversaturation effects such as color bleeding (seen in the red chair).
One practical example of a manufacturer desaturating their video to 'save' bandwidth is Arecont Bandwidth Savings Mode (which we tested here).
Manufacturer Model Differences
Across specific models in a given manufacturer's line, significant differences in bitrate may occur, despite the cameras using the same resolution and framerate. This may be due to different image sensors or processors being used, different default settings in each model, better or worse low light performance, or any number of other factors.
For example, the following image shows two cameras, an Axis Q1604 and Axis M3004, both 720p, 10 fps, set to a ~20' horizontal FOV, at compression of ~Q28. Despite these factors being standardized, in this well lit indoor scene, the Q1604's bitrate was 488 Kb/s while the M3004 consumed 1.32 Mb/s, nearly 3x the bandwidth.
Beware: model differences have become more extreme in some cases, as some cameras support Smart CODECs while others in the same line may not.
Measure Your Own Cameras
As this guide shows, there are few easy, safe rules for estimating bandwidth (and therefore storage) abstractly. Too many factors impact it, and some of them are driven by factors within the camera that are impossible to know in advance.
Though it is important to understand which factors impact bandwidth, use this knowledge with your own measurements of the cameras you plan to deploy. This will ensure the most accurate estimates and planning for deployments.
As a prerequisite, you need to know the speed of objects, most typically people.
Speed of People
The faster a person moves, the more likely you are to miss an action. You know the 'speed' of frame rate - 1 frame per second, 10 frames per second, 30, etc., but how many frames do you need for reliable capture?
Here's how fast people move.
For a person walking, a leisurely, ordinary pace is ~4 feet per second, covering this 20 foot wide FoV in ~5 seconds:
Note: Click here to watch the demo on IPVM
For a person running, our subject goes through the 20' FOV in ~1.25 seconds, meaning he covers ~16' in one second:
Note: Click here to watch the demo on IPVM
For example, if you only have 1 frame per second, a person can easily move 4 to 16 feet in that time frame. We need to keep this in mind when evaluating frame rate selection.
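Translating these speeds into distance covered between frames is simple division; a quick sketch using the walking and running speeds above:

```python
# Distance a subject moves between consecutive frames at different frame rates.
speeds_ft_per_s = {"walking": 4, "running": 16}   # speeds measured in the demos above

for label, speed in speeds_ft_per_s.items():
    for frame_rate in (1, 10, 30):
        feet_per_frame = speed / frame_rate
        print(f"{label} at {frame_rate} fps: ~{feet_per_frame:.2f} ft between frames")
```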
We cover:
- What speed do people move at, and how does that compare to frame rates?
- Walking: What risks do you have capturing a person walking at 1, 10 and 30fps?
- Running: What risks do you have capturing a person running at 1, 10 and 30fps?
- Head Turning: How many more clear head shots do you get of a person at 1, 10 and 30fps?
- Playing Cards: What do you miss capturing card dealing at 1, 10 and 30fps?
- Shutter Speed vs Frame Rate: How are these two related?
- Bandwidth vs Frame Rate: How much does bandwidth rise with increases in frame rate?
- Average Frame Rates Used: What is the industry average?
Walking Examples
As our subject walks through the FOV, we view how far he moves from one frame to the next. In 30 and 10 fps streams, he does not complete a full stride. However, in the 1fps example, he has progressed ~4' between frames, which falls in line with our measured walking speed of ~4' a second.
Note: Click here to watch the comparisons on IPVM
Running Examples
With our subject sprinting through the FOV, the 30 fps stream still catches him mid stride, while in the 10 fps stream, he has traveled ~1' between frames. In the 1 fps example, only one frame of the subject is captured, with him clearing the rest of the FOV between frames, with only his back foot visible in the second frame.
Note: Click here to watch the comparisons on IPVM
Capturing Faces
Trying to get a clear face shot can be difficult when people move because they naturally shift their heads frequently. In this demonstration, we had the subject shake their head back and forth while walking down a hallway to show the difference frame rate makes.
Take a look:
Note: Click here to watch the demo on IPVM
Notice, at 1fps, only a single clear head shot is captured, but at 10fps, you get many more. Finally, at 30fps, you may get one or two more, but it is not much of an improvement.
Playing Cards
In this test, our subject dealt a series of playing cards from ace to five with the camera set to its default shutter speed (1/30s).
In the 30 and 10 fps examples, we can see each card as it is removed from the top of the deck and placed on the table. However, in the 1 fps example, we see only the cards appearing on the table, not the motions of the dealer, as frame rate is too low.
Note: Click here to view the comparison samples on IPVM
Shutter Speed vs Frame Rate
A common misconception is that frame rate causes blurring. It does not; the camera's automatic shutter speed control does.
Dealing cards ace through five again, we raised the camera's minimum shutter speed to 1/4000 of a second. The image below compares the motion blur in the dealer's hand and cards, with the '2' card much more easily legible in the fast shutter speed example.
The 1/4000s shutter speed completely eliminated all traces of motion blur. 1/1000s and 1/2000s shutter speeds significantly reduced blur, but it was still noticeable around the dealer's fingers and the edges of the cards when looking at the recordings frame by frame.
If you have blurring, you have a shutter speed configuration problem, not a frame rate one.
Slow Shutter and Frame Rate
On the other side, sometimes users choose, or camera manufacturers default to, a maximum shutter (exposure) time slower than the frame interval (e.g., a 1/4s shutter on a camera streaming 30 fps). Not only does this cause blurring of moving objects, you also lose frames.
Key lesson: The frame rate per second can never be higher than the number of exposures per second. If you have a 1/4s shutter, the shutter / exposure only opens and closes 4 times per second (i.e., 1/4s + 1/4s + 1/4s
+ 1/4s = 1s). Since this only happens 4 times, you can only have 4 frames in that second.
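The same arithmetic can be written out directly; a small sketch:

```python
# Maximum achievable (unique) frame rate for a given exposure time: the sensor
# cannot produce more distinct frames per second than it has exposures.
def max_unique_fps(shutter_time_s: float) -> float:
    return 1.0 / shutter_time_s

print(max_unique_fps(1 / 4))    # 4.0  -> a 1/4s shutter yields at most 4 frames/s
print(max_unique_fps(1 / 15))   # 15.0 -> a 1/15s shutter yields at most 15 frames/s
print(max_unique_fps(1 / 30))   # 30.0 -> fast enough to feed a full 30 fps stream
```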
Some manufacturers fake frames with slow shutter, simply copying the same frame over and over again. For example, if you have 1/15s shutter, you can only have 15 exposures and, therefore, 15 frames. To make it seem like you have 30 frames, each frame can be sent twice in a row.
Be careful with slow shutter. Beyond blur, you can either lose frames or waste storage.
Bandwidth vs Frame Rate
Frame rate impacts bandwidth, but for modern codecs, like H.264, it is less than linear. So if you increase frame rate by 10x, the increase in bandwidth is likely to be far less, often only 3 to 5 times more bandwidth. This is something we regularly see mistaken in the industry.
The reason for this is inter-frame compression, which reduces bandwidth needs for parts of scenes that remain the same across frames (for more on inter and intra frame compression, see our CODEC tutorial).
Illustrating this point further, we took 30, 10 and 1 fps measurements to demonstrate the change in bit rate in a controlled setting in our conference room. The average bitrates were as follows:
- 1 fps was 0.179 Mb/s
- 10 fps, with 10x more frames, consumed 4x more bandwidth than 1 fps (0.693 Mb/s)
- 30 fps, with 3x more frames than 10 fps, consumed double the bandwidth of 10 fps and, with 30x the frames, 7x the bandwidth of 1 fps (1.299 Mb/s)
These measurements were done with 1 I frame per second, the most common setting in professional video surveillance (for more on this, see: Test: H.264 I vs P Frame Impact).
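Using the measured bitrates above, the scaling factors can be computed directly, showing how far from linear the relationship is:

```python
# Measured average bitrates from the conference room test above (Mb/s).
measured_mbps = {1: 0.179, 10: 0.693, 30: 1.299}
base_rate = measured_mbps[1]

for fps, rate in measured_mbps.items():
    print(f"{fps} fps: {fps}x the frames of 1 fps, "
          f"{rate / base_rate:.1f}x the bandwidth")
```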
For more on this, see our reports testing bandwidth vs frame rate and 30 vs 60 fps.
Average Frame Rates Used
Average industry frame rate is ~10fps, reflecting that this level provides enough frames to capture most actions granularly while minimizing storage costs.
As shown in the previous section, going from 10fps to 30fps can double storage costs but only marginally improve details captured.
For more commentary on why integrators choose the frame rates they do, see the Average Frame Rate Used Statistics report.
Bandwidth
Bandwidth is one of the most fundamental, complex and overlooked aspects of video surveillance.
Many simply assume it is a linear function of resolution and frame rate. Not only is that wrong, it misses a number of other critical elements, and failing to consider these issues could result in overloaded networks or shorter storage durations than expected.
We take a look at these factors, broken down into fundamental topics common between cameras, and practical performance/field issues which vary depending on camera performance, install location, and more.
Fundamental Issues
- Resolution: Does doubling pixels double bandwidth?
- Framerate: Is 30 FPS triple the bandwidth of 10 FPS?
- Compression: How do compression levels impact bandwidth?
- CODEC: How does CODEC choice impact bandwidth?
- Smart CODECs: How do these new technologies impact bandwidth?
Practical Performance/Field Issues
- Scene complexity: How much do objects in the FOV impact bitrate?
- Field of view: Do wider views mean more bandwidth?
- Low light: How do low lux levels impact bandwidth?
- WDR: Is bitrate higher with WDR on or off?
- Sharpness: How does this oft-forgotten setting impact bitrate?
- Color: How much does color impact bandwidth?
- Manufacturer model performance: Same manufacturer, same resolution, same FPS. Same bitrate?
Scene Complexity
The most basic commonly missed element is scene complexity. Contrast the 'simple' indoor room to the 'complex' parking lot:
Even if everything else is equal (same camera, same settings), the 'complex' parking lot routinely requires 300%+ more bandwidth than the 'simple' indoor room because there is more activity and more details. Additionally, scene complexity may change by time of day, season of the year, weather, and other factors, making it even more difficult to fairly assess.
We look at this issue in our Advanced Camera Bandwidth Test.
Resolution
On average, a linear relationship exists between pixel count (1MP, 2MP, etc.) and bandwidth. So for example, if a 1MP camera uses 1 Mb/s of bandwidth, a 2MP camera on average might use ~2Mb/s.
However, variations across manufacturers and models are significant. In IPVM testing, some cameras increase at a far less than linear level (e.g., just 60% more bandwidth for 100% more pixels) while others rose at far greater than linear (e.g., over 200% more bandwidth for 100% more pixels). There
were no obvious drivers / factors that distinguished why models differed in their rate of increase.
As a rule of thumb, a linear (1:1) ratio may be used when estimating bandwidth differences across resolutions. However, we strongly recommend measuring the actual cameras, as such a rule of thumb may be off by a lot.
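For planning purposes, the linear rule of thumb can be expressed as a simple estimator; the 1 Mb/s reference value below is a placeholder, not a measurement, and real cameras ranged from well below to well above this estimate:

```python
def estimate_bitrate_mbps(target_mp: float, reference_mp: float = 1.0,
                          reference_mbps: float = 1.0) -> float:
    """Linear (1:1) rule of thumb: bandwidth scales with pixel count.
    Treat the result only as a starting point; measure real cameras."""
    return reference_mbps * (target_mp / reference_mp)

print(estimate_bitrate_mbps(2))   # ~2 Mb/s for a 2MP camera, given 1 Mb/s at 1MP
print(estimate_bitrate_mbps(8))   # ~8 Mb/s for an 8MP (4K) camera
```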
Frame Rate
Frame rate impacts bandwidth, but for inter-frame CODECs such as H.264, it is less than linear. So if you increase frame rate by 10x, the increase in bandwidth is likely to be far less, often only 3 to 5 times more bandwidth. Illustrating this, we took 30, 10, and 1 fps measurements to demonstrate the change in bit rate in a controlled setting in our conference room. The average bitrates were as follows:
- 1 fps: 0.179 Mb/s
- 10 fps: 0.693 Mb/s (10x the frames of 1 fps, but only 4x the bandwidth)
- 30 fps: 1.299 Mb/s (3x the frames of 10 fps, but only double the bandwidth; 30x the frames of 1 fps, but only 7x the bandwidth)
(These measurements were done at 1 I-frame per second with quantization standardized at ~28.)
For more detail on frame rate's impact on bitrate, see our Frame Rate Guide for Video Surveillance.
Compression
Compression, also known as quantization, has an inverse relationship to bandwidth: the higher the compression, the lower the bandwidth will be.
CODECs
A key differentiation across CODECs is supporting inter-frames (e.g., H.264, H.265) vs intra-frame only (e.g., MJPEG, JPEG2000).
- Inter-frame CODECs such as H.264/H.265 not only compress similar pixels in an image, they reference previous frames and transmit only the changes in the scene from frame to frame, potentially a large bandwidth savings. For example, if a subject moves through an empty hallway, only the pixels displaying him change between frames and are transmitted, while the static background is not.
- Intra-frame only CODECs encode each individual frame as an image, compressing similar pixels to reduce bitrate. This results in higher bandwidth as each frame must be re-encoded fully, regardless of any activity in the scene.
For more on inter and intra frame compression, see our CODEC tutorial.
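A toy illustration of the difference (not a real H.264 or MJPEG implementation): frames are modeled as lists of blocks, with an intra-only encoder sending every frame whole and an inter-frame encoder sending only the blocks that changed.

```python
def intra_only_cost(frames):
    """Every frame is encoded in full, regardless of changes."""
    return sum(len(frame) for frame in frames)

def inter_frame_cost(frames):
    """First frame is sent whole (I-frame); later frames send changed blocks only."""
    cost, previous = len(frames[0]), frames[0]
    for frame in frames[1:]:
        cost += sum(1 for a, b in zip(previous, frame) if a != b)
        previous = frame
    return cost

# A mostly static "hallway": 10 frames of 100 blocks, one moving block per frame.
frames = [[0] * 100 for _ in range(10)]
for i, frame in enumerate(frames[1:], start=1):
    frame[i] = 1

print(intra_only_cost(frames))    # 1000 block units
print(inter_frame_cost(frames))   # 117 block units: the static background is not resent
```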
The vast majority of cameras in use today, and for the past several years,
use H.264, due to its bandwidth advantages over MPEG-4 and Motion JPEG.
While 1080p, 4MP, 4K, and other resolutions remain in common use in 2018, there have been some notable changes in camera resolutions in the past year.
- 3MP/5MP confusion: Historically, users have known 3 and 5MP resolutions as 4:3 aspect ratio (2048x1536 and 2560x1944, respectively). But now, cameras using 16:9 variants of these resolutions are available, delivering increased horizontal PPF, but reducing height of the coverage area, which may eliminate areas visible when using 4:3 cameras.
- 10MP uncommon: Though it used to be one of the most common "high" resolutions, 10MP has practically fallen out of use in 2017/2018.
- 6MP available: Finally, 6MP cameras are now readily available, due to new generations of sensors using this resolution. 6MP uses an odd (for surveillance) 3:2 aspect ratio.
720p cameras, once most popular by a wide margin, have sharply declined as higher resolution options have come down in price and several manufacturers offer fewer new models in this resolution compared to higher.
Resolution Vs. Cost
Everything else equal, higher resolution cameras generally cost more than lower resolution models, though pricing for some 4K cameras has started to decline in 2017. Higher prices are due in part to simple increases in component costs adding up, such as more expensive image sensors, additional processors required, higher resolving power lenses, etc.
However, note that this higher cost does not always result in higher performance, as advanced features such as super low light and true WDR are not always supported, or do not perform as well, in higher resolution models, or require a significant increase in cost. For example, 1080p cameras most commonly offer strong WDR and super low light options, with such features becoming less common in higher resolution cameras.
Sensor Resolution vs. Stream Resolution
While manufacturers typically specify cameras based on the resolution (i.e. pixel count) of the sensor, sometimes, the resolution of the stream sent can be less. This happens in multiple cases:
- Limited camera capabilities: In some cases, manufacturers may use readily available sensors of one resolution but crop the sensor to a lower pixel count due to limitations in processing at full resolution. For example, a 6MP sensor may be cropped to 5MP in order to stream at higher frame rates or apply WDR or higher gain levels.
- Panoramic cameras: Second, manufacturers often crop unused portions of the sensor from panoramic camera streams, so a "12MP" fisheye model may actually stream at 8-9MP. See our report Beware Imager vs Stream Resolution for more information on this issue.
- Reduced in software: Finally, an installer may explicitly or mistakenly set a camera to a lower resolution. Sometimes this is done to save bandwidth, but other times it is simply an error or glitch in the VMS default resolution configuration. Either way, many times an HD camera may look 'terrible' but the issue is simply that it is not set to its max stream resolution (i.e., a 3MP camera set to 640 x 480).
Because of these issues, users should be sure to check not only the resolution of the sensor but the stream resolutions supported and used, typically found lower down the camera's datasheet:
Compression Impact On Resolution
Because resolution most often simply means pixel count, no consideration is given to how much the pixels are compressed. Each pixel is assigned a value to represent its color, typically out of a range of ~16 million (24 bits), creating a huge amount of data. For instance, a 1080p/30fps uncompressed stream is over 1 Gb/s. However, surveillance video is compressed, with that 1080p/30fps stream more typically recorded at 1 Mb/s to 8 Mb/s, roughly 1/100th to 1/1000th the size of the uncompressed stream. The only question, and it is a big one, is how much the video is compressed.
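The 'over 1 Gb/s' figure follows directly from pixel count, bit depth, and frame rate:

```python
# Uncompressed bitrate of a 1080p / 30 fps / 24-bit color stream.
width, height, bits_per_pixel, fps = 1920, 1080, 24, 30
uncompressed_bps = width * height * bits_per_pixel * fps

print(uncompressed_bps / 1e9)   # ~1.49 Gb/s before compression
print(uncompressed_bps / 4e6)   # ~373x a typical 4 Mb/s recorded stream
```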
Picking the right compression level can be tricky. How much compression loss can be tolerated often depends on subjective preferences of viewers or the details of the scene being captured. Equally important, increasing compression can result in great savings on hard drive costs (less storage required for similar durations), server configuration (less CPU required to store less bandwidth), and switches (copper gigabit switches may be used instead of fiber 10GbE).
Even if two cameras have the same resolution (i.e., pixel count), the visible image quality could vary substantially because of differences in the compression levels chosen. Here is an example:
For full coverage of these details, see our video quality / compression tutorial.
Also important when considering compression is that manufacturers' default compression settings vary significantly; for more, see: IP Camera Manufacturer Compression Comparison.
Angle Of Incidence Is Key
Regardless of how high quality an image is, it needs to be at a proper angle to 'see' details of a subject, as cameras cannot see through walls or people. For instance:
Even if the image on the left had 10x the pixels as the one on the right, the left one is incapable of seeing the full facial details of the subject as he is simply not facing the camera.
This is frequently a practical problem in trying to cover a full parking lot with a single super high-resolution camera. Even if you can get the 'right'
number of pixels, if a car is driving opposite or perpendicular to the camera, you may not have any chance of getting its license plate (similarly for a person's face).
Resolution Overkill
Historically, surveillance has been starved for resolution, with almost always too little for its needs. Anyone familiar with suspect photos on their local news can see this:
However, as the amount of pixels has increased to 1080p and beyond, the opposite issue presents itself: unnecessarily high resolution for the scene. Once you have enough to capture facial and license plate details, most users get little practical benefit from more pixels. The image might look 'nicer' but the evidentiary quality remains the same. This is a major consideration when looking at PPF calculations and ensuring that you do not 'waste' pixels.
Additional Factors Impacting Resolution
Finally, note that beyond the issues discussed above, many other factors beyond pixel count impact delivered surveillance resolution.
Do not accept specified resolution (i.e. pixel count) as the one and only quality metric as it will result in great problems. Understand and factor in all of these drivers to obtain the highest quality for your applications.
Resolution – Pixel Count
Now, with IP, manufacturers do not even attempt to measure performance. Instead, resolution has been redefined as counting the number of physical pixels that an image sensor has.
For example, a 1080p resolution camera is commonly described as having 2MP (million pixel) resolution because the sensor used has ~2 million pixels on it (technically usually 2,073,600 pixels as that is the product of 1920 horizontal x 1080 vertical pixels). The image of an imager below shows this example:
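The same pixel arithmetic applies to the other resolutions discussed in this guide; a quick sketch:

```python
# Pixel counts for common surveillance resolutions (width x height).
resolutions = {
    "720p": (1280, 720),
    "1080p": (1920, 1080),
    "3MP (4:3)": (2048, 1536),
    "5MP (4:3)": (2560, 1944),
    "4K": (3840, 2160),
}
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels (~{pixels / 1e6:.1f}MP)")
```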
Pixels Determine Potential, Not Quality
Pixels are a necessary, but not sufficient, factor for capturing details. Without a minimum number of pixels for a given area / target, it is impossible. See our tutorial on why Pixels Determine Potential, Not Quality.
Limitations
The presumption is that more pixels, much like higher line counts, deliver higher 'quality'. However, this is far from certain.
Just like with classic resolution measurements that used only ideal lighting conditions, measuring pixels alone ignores the impact of common real world surveillance lighting challenges. Often, but not always, having many more pixels can result in poorer resolving power in low light conditions.
Plus, cameras with lower pixel counts but superior image processing can deliver higher quality images in bright sunlight / WDR scenes.
Nonetheless, pixels have become a cornerstone of specifying IP video surveillance. Despite its limitations, you should:
- Recognize that when a surveillance professional is talking about resolution, they are almost certainly referring to pixel count, not resolving power
- Understand the different resolution options available
Common Surveillance Resolutions
The table below summarizes the most common resolutions used in production video surveillance deployments today. Note that VGA is no longer common except in thermal cameras, but is included here for reference of what 'standard definition' refers to.