The document discusses network video and image usability. It begins by explaining the transition from analogue to network video systems and how these systems allow for real-time monitoring and predictive capabilities. It then covers resolutions like HDTV and how pixel density rather than resolution determines image quality for network video. Finally, it shows examples of bandwidth and storage needs at different resolutions.
SpotCam is a capable Wi-Fi camera for your home (spotcam)
SpotCam is a new entrant in the cloud-based home webcam market. The US$149.99 camera offers easy setup and features such as sound monitoring, plus the ability to speak to the camera remotely and have its speaker play your voice.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/06/develop-next-gen-camera-apps-using-snapdragon-computer-vision-technologies-a-presentation-from-qualcomm/
Judd Heape, VP of Product Management for Camera, Computer Vision and Video Technology at Qualcomm Technologies, presents the “Develop Next-gen Camera Apps Using Snapdragon Computer Vision Technologies” tutorial at the May 2023 Embedded Vision Summit.
The Qualcomm Snapdragon mobile platform powers the world’s best smartphones, XR headsets, PCs, wearables, automobiles and IoT products. These devices leverage the latest computer vision technologies that power Snapdragon’s ISP, AR/VR perception pipeline and advanced video capture features.
In this talk, Heape uses real-world examples—with a focus on AR/VR products—to explore how Snapdragon developers harness these computer vision technologies to enable advanced use cases with premium features, performance boosts and power savings. He also shows how developers use the Snapdragon computer vision SDKs and their camera-centric APIs to tap Snapdragon’s amazing hardware computer vision technologies to create next-generation immersive applications.
Step Into Security Webinar - IP Security Camera Techniques for Video Surveillance (Keith Harris)
LENSEC's Step Into Security webinar covers techniques for IP security cameras used for video surveillance. This information is useful for security personnel and anyone who works with security cameras.
Physical security expert Keith Harris is a veteran in the security industry and has worked with cameras for 30 years. Keith provides information on IP security camera techniques for physical security applications.
Webinar Agenda:
•Camera Choice
•Lens Selection
•Power
•Record Capability
•Lighting
•Transmission
You can find this and other webinars covering physical security and life safety topics on LENSEC's website: http://bit.ly/StepIntoSecurityWebinarArchive
Share this info with your colleagues and invite them to join us.
Design in Motion: Video Production Workflow (goodfriday)
Creating high quality video is a combination of art and science. Learn the tips from the pros on how to optimize video compression to deliver the best quality at the smallest sizes with Expression Media Encoder, a feature of Microsoft Expression Media.
In the world of security cameras, 18x zoom can equal 36x. More specifically, a high-resolution security camera with 18x optical zoom can provide images that, for surveillance purposes, are just as useful as, or even more useful than, those delivered by a standard-resolution 4CIF camera with twice the zoom capability.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/videantis/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Marco Jacobs, VP of Marketing at videantis, presents the "Computer-vision-based 360-degree Video Systems: Architectures, Algorithms and Trade-offs" tutorial at the May 2017 Embedded Vision Summit.
360-degree video systems use multiple cameras to capture a complete view of their surroundings. These systems are being adopted in cars, drones, virtual reality, and online streaming systems. At first glance, these systems wouldn’t seem to require computer vision, since they simply present the images that the cameras capture. But even relatively simple 360-degree video systems require computer vision techniques to geometrically align the cameras – both in the factory and while in use. Additionally, differences in illumination between the cameras cause color and brightness mismatches, which must be addressed when combining images from different cameras.
Computer vision also comes into play when rendering the captured 360-degree video. For example, some simple automotive systems simply provide a top-down view, but more sophisticated systems enable the driver to select the desired viewpoint. In this talk, Jacobs explores the challenges, trade-offs and lessons learned while developing 360-degree video systems, with a focus on the crucial role that computer vision plays in these systems.
2. www.axis.com
Image Usability - Timetable
TIME DESTINATION
00:01 Analogue to Network Video
00:03 Resolution - HDTV
00:04 How Many Pixels?
00:10 Pixel Density v Resolution
00:12 Corridor Format
00:13 Light
00:14 More Than Security
What Level of Detail is Required?
There are three main quality levels normally used to determine the image detail required for a camera:
Detection: to detect the presence of a person in the image, without needing to see their face.
Recognition: to recognise somebody you know, or determine that somebody is not known to you.
Identification: to record high-quality facial images which can be used in court to prove someone’s identity beyond reasonable doubt.
What Level of Detail is Required – Analogue?
Historically, with analogue systems these levels were defined using a percentage (%) of the screen height:
Detection: 10%
What Level of Detail is Required – Network Video?
For network video systems we need a measurement system that is:
Consistent for all camera resolutions
Consistent for all specifiers, manufacturers etc.
Easy to use and define

                Analogue   Network Video (IP)
Detection       10%        25 pix/m
Recognition     50%        125 pix/m
Identification  120%       250 pix/m (500 pix/m)

Horizontal pixels per metre defines image quality.
EN 50132-7: CCTV Surveillance Systems for Use in Security Applications – Part 7
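The pix/m thresholds above lend themselves to a quick check: horizontal pixel density is simply the camera's horizontal resolution divided by the width of the scene it covers. A minimal Python sketch (the function names, threshold table and example figures are illustrative, not from the presentation; the thresholds follow the EN 50132-7 values above):

```python
# Operational requirement thresholds from EN 50132-7, in horizontal pixels per metre.
THRESHOLDS = {
    "identification": 250,
    "recognition": 125,
    "detection": 25,
}

def pixel_density(horizontal_pixels: int, scene_width_m: float) -> float:
    """Horizontal pixels per metre across the imaged scene."""
    return horizontal_pixels / scene_width_m

def detail_level(density: float) -> str:
    """Return the best quality level a given pixel density supports."""
    for level, minimum in THRESHOLDS.items():  # ordered highest to lowest
        if density >= minimum:
            return level
    return "insufficient"

# Example: a 1080p camera (1920 px wide) covering a 4 m wide doorway.
density = pixel_density(1920, 4.0)   # 480 pix/m
print(detail_level(density))         # identification
```

The same arithmetic works in reverse when specifying a system: multiply the required pix/m by the scene width to get the minimum horizontal resolution needed.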
Pixel Density v Camera Resolution
You will always get a better pixel density from a higher-resolution camera. True or false?
It depends on the field of view (FoV):
If the FoV is the same, the higher-resolution camera gives a better pixel density.
If the FoV is different, a lower-resolution camera may provide a better pixel density.
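The FoV dependence is easy to demonstrate numerically. A short sketch (the camera resolutions and scene widths are hypothetical examples, not figures from the presentation):

```python
def pixel_density(horizontal_pixels: int, scene_width_m: float) -> float:
    """Horizontal pixels per metre: resolution divided by scene width."""
    return horizontal_pixels / scene_width_m

# Same FoV: the higher-resolution camera wins.
print(pixel_density(1920, 8.0))   # 1080p over 8 m  -> 240 pix/m
print(pixel_density(1280, 8.0))   #  720p over 8 m  -> 160 pix/m

# Different FoV: the lower-resolution camera can win.
print(pixel_density(1280, 3.0))   #  720p over 3 m  -> ~427 pix/m
print(pixel_density(1920, 12.0))  # 1080p over 12 m -> 160 pix/m
```

In other words, a 720p camera zoomed in on a narrow scene delivers far more pixels per metre than a 1080p camera covering a wide one.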