1) The document describes algorithms for creating extended focused images from digital holograms of 3D objects. It involves using focus measures and depth from focus techniques on multiple hologram reconstructions to generate a depth map and then composite the reconstructions into a single in-focus image.
2) Two approaches for the extended focused image are presented: a pointwise approach that selects pixels from individual reconstructions, and a neighborhood approach that averages blocks of pixels.
3) Preliminary results demonstrate extended focused images generated with both approaches, though the neighborhood method produces smoother results by reducing errors.
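The depth-from-focus compositing described above can be sketched in a few lines. This is only an illustrative toy (local variance as the focus measure, tiny pure-Python images); the document's actual focus measures and neighborhood averaging are not reproduced here.

```python
def local_variance(img, y, x):
    """Variance of the 3x3 neighbourhood around (y, x): a simple focus measure."""
    vals = [img[j][i]
            for j in range(max(0, y - 1), min(len(img), y + 2))
            for i in range(max(0, x - 1), min(len(img[0]), x + 2))]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def extended_focus(stack):
    """Pointwise composite: per pixel, keep the value from the reconstruction
    that maximises the focus measure; also return the resulting depth map."""
    h, w = len(stack[0]), len(stack[0][0])
    depth_map = [[0] * w for _ in range(h)]
    efi = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = max(range(len(stack)),
                       key=lambda d: local_variance(stack[d], y, x))
            depth_map[y][x] = best
            efi[y][x] = stack[best][y][x]
    return efi, depth_map
```

The neighborhood variant the summary mentions would instead pick (or average) per block rather than per pixel, which smooths out depth-map errors.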
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/10/person-re-identification-and-tracking-at-the-edge-challenges-and-techniques-a-presentation-from-the-university-of-auckland/
Morteza Biglari-Abhari, Senior Lecturer at the University of Auckland, presents the “Person Re-Identification and Tracking at the Edge: Challenges and Techniques” tutorial at the May 2021 Embedded Vision Summit.
Numerous video analytics applications require understanding how people are moving through a space, including the ability to recognize when the same person has moved outside of the camera’s view and then back into the camera’s view, or when a person has passed from the view of one camera to the view of another. This capability is referred to as person re-identification and tracking. It’s an essential technique for applications such as surveillance for security, health and safety monitoring in healthcare and industrial facilities, intelligent transportation systems and smart cities. It can also assist in gathering business intelligence such as monitoring customer behavior in shopping environments. Person re-identification is challenging.
In this talk, Biglari-Abhari discusses the key challenges and current approaches for person re-identification and tracking, as well as his initial work on multi-camera systems and techniques to improve accuracy, especially fusing appearance and spatio-temporal models. He also briefly discusses privacy-preserving techniques, which are critical for some applications, as well as challenges for real-time processing at the edge.
Depth estimation: do we need to throw old things away? (NAVER Engineering)
Presentation overview: CNNs for depth estimation based on the human visual system, and CNNs inspired by conventional methods
Case 1: Cross-channel stereo matching
Case 2: Depth from light field
Case 3: Multiview stereo
Conclusion
In recent years, due to advances in video and image editing tools, it has become increasingly easy to modify multimedia content. Doctored videos are very difficult to identify through visual examination, as the artifacts left behind by the processing steps are subtle and cannot easily be spotted by eye. The integrity of digital videos can therefore no longer be taken for granted, and they are not readily acceptable as proof of evidence in a court of law. Identifying the authenticity of videos has thus become an important field of information security.
In this thesis work, we present a novel approach to detect and temporally localize video inpainting forgery based on optical-flow consistency. The proposed algorithm comprises two stages. In the first stage, we detect whether the given video is inpainted or authentic; in the second, we perform temporal localization. Towards this, we first compute the optical flow between frames. We then analyze the chi-square goodness-of-fit values obtained from optical-flow histograms using a Gaussian mixture model, and apply a threshold to classify videos as authentic or inpainted. In the next step, we extract Transition Probability Matrices (TPMs) by modelling the optical flow as a first-order Markov process. SVM-based classification is then applied to the obtained TPM features to decide whether a block of non-overlapping frames is authentic or inpainted, thus achieving temporal localization. To evaluate the robustness of the proposed algorithm, we perform experiments against two popular and efficient inpainting techniques, testing on public datasets such as PETS and SULFA. The results show that the approach is effective against these inpainting techniques, detecting and localizing the inpainted frames in a video with high accuracy and few false positives.
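The TPM step can be illustrated with a small sketch. The quantised optical-flow input and the state count are assumptions for illustration, not the thesis's actual settings:

```python
def transition_probability_matrix(seq, n_states):
    """Model a quantised (binned) optical-flow sequence as a first-order
    Markov process: count state -> next-state transitions, then
    row-normalise the counts into probabilities."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    tpm = []
    for row in counts:
        total = sum(row)
        tpm.append([c / total if total else 0.0 for c in row])
    return tpm
```

Flattening such a matrix row by row yields the kind of fixed-length feature vector an SVM can classify.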
These slides cover how image processing is done, its applications and advantages in various sectors, and some research topics related to image processing.
Implementing Camshift on a Mobile Robot for Person Tracking and Pursuit, ICDM (Soma Boubou)
These are the slides for a paper presented at an ICDM workshop in Vancouver, Canada, in 2011.
In the paper we describe a Camshift implementation on a mobile robotic system for tracking and pursuing a moving person with a monocular camera. The Camshift algorithm uses color-distribution information to track a moving object. It is computationally efficient enough for real-time applications and robust to image noise, and it deals well with illumination changes, shadows, and irregular (linear/non-linear) object motion. We compared Camshift with HSV color-based tracking, and our results show that Camshift outperformed the HSV color-based tracker; moreover, Camshift is much more robust to different illumination conditions.
Paper link:
http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6137446&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6137446
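The core of Camshift is an iterative mean-shift step that re-centres a search window on the centroid of a colour back-projection map. Below is a minimal pure-Python sketch of that step only; real Camshift (e.g. OpenCV's cv2.CamShift) additionally adapts the window's size and orientation each frame.

```python
def mean_shift(prob, win, n_iter=20, eps=1e-3):
    """prob: 2-D list of back-projection weights in [0, 1];
    win: (x, y, w, h) with float x, y. Each iteration moves the window so
    its centre sits on the centroid of the weights it currently covers."""
    x, y, w, h = win
    for _ in range(n_iter):
        xi, yi = int(x), int(y)
        m00 = m10 = m01 = 0.0
        for j in range(max(0, yi), min(yi + h, len(prob))):
            for i in range(max(0, xi), min(xi + w, len(prob[0]))):
                p = prob[j][i]
                m00 += p
                m10 += i * p
                m01 += j * p
        if m00 == 0.0:
            break  # no evidence under the window
        nx = m10 / m00 - (w - 1) / 2
        ny = m01 / m00 - (h - 1) / 2
        if abs(nx - x) < eps and abs(ny - y) < eps:
            break  # converged
        x, y = nx, ny
    return x, y, w, h
```

In the tracking loop, prob would be the histogram back-projection of the target's colour model onto the current frame.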
These slides give an initial introduction to image processing, covering:
1. Introduction to image processing
2. Elements of visual perception
3. Image sensing and quantization
4. A simple image formation model
5. Basic concepts of sampling and quantization
Readers will find the topics described in these slides easy to understand; each topic is illustrated with a detailed description.
Please read, and leave a comment if you like them. Thanks!
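The sampling-and-quantization topic above can be made concrete with a tiny sketch: uniform quantization maps each continuous sample to the nearest of a fixed number of evenly spaced levels. This is a generic illustration, not taken from the slides:

```python
def quantize(samples, levels, lo=0.0, hi=1.0):
    """Uniformly quantize continuous samples in [lo, hi] onto `levels`
    evenly spaced reconstruction values (round to the nearest level)."""
    step = (hi - lo) / (levels - 1)
    return [lo + round((s - lo) / step) * step for s in samples]
```

Sampling is the companion step: evaluating the continuous signal only on a discrete grid before quantizing its amplitudes.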
Modeling perceptual similarity and shift invariance in deep networks (NAVER Engineering)
Abstract: While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on ImageNet classification have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new dataset of human perceptual similarity judgments. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins on our dataset. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
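The deep-feature distance the abstract evaluates can be caricatured as follows: unit-normalise each layer's features, sum squared differences, and average over layers. Real perceptual metrics of this kind operate on network activations and learn per-channel weights; the toy feature dictionaries here are stand-ins for those activations.

```python
import math

def feature_distance(feats_a, feats_b):
    """Mean, over layers, of the squared distance between unit-normalised
    feature vectors: the skeleton of a deep 'perceptual' metric.
    feats_*: {layer_name: list of floats}."""
    total = 0.0
    for layer in feats_a:
        a, b = feats_a[layer], feats_b[layer]
        na = math.sqrt(sum(v * v for v in a)) or 1.0
        nb = math.sqrt(sum(v * v for v in b)) or 1.0
        total += sum((x / na - y / nb) ** 2 for x, y in zip(a, b))
    return total / len(feats_a)
```

Identical features give distance zero; the larger the value, the less perceptually similar the two inputs under this (simplified) metric.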
Despite their strong transfer performance, deep convolutional representations surprisingly lack a basic low-level property -- shift-invariance, as small input shifts or translations can cause drastic changes in the output. Commonly used downsampling methods, such as max-pooling, strided-convolution, and average-pooling, ignore the sampling theorem. The well-known signal processing fix is anti-aliasing by low-pass filtering before downsampling. However, simply inserting this module into deep networks degrades performance; as a result, it is seldom used today. We show that when integrated correctly, it is compatible with existing architectural components, such as max-pooling and strided-convolution. We observe increased accuracy in ImageNet classification, across several commonly-used architectures, such as ResNet, DenseNet, and MobileNet, indicating effective regularization. Furthermore, we observe better generalization, in terms of stability and robustness to input corruptions. Our results demonstrate that this classical signal processing technique has been undeservingly overlooked in modern deep networks.
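The anti-aliasing fix is easy to see in one dimension: low-pass filter before subsampling. Here is a sketch with a [1, 2, 1]/4 binomial filter and reflect padding (one common choice; the paper's actual filters and integration points differ):

```python
def blur_downsample(x):
    """Anti-aliased 1-D downsampling: low-pass the signal with a
    [1, 2, 1]/4 binomial filter (reflect padding), then keep every
    second sample."""
    padded = [x[1]] + list(x) + [x[-2]]  # reflect-pad one sample each side
    blurred = [(padded[i] + 2 * padded[i + 1] + padded[i + 2]) / 4
               for i in range(len(x))]
    return blurred[::2]
```

On an alternating signal, naive striding keeps only the peaks (pure aliasing), while blurring first returns the signal's true mean, which is exactly the stability the abstract describes.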
Holographic Projection Technology: Complete Details, New PPT (Abin Baby)
This seminar examines the new technology of holographic projections. It highlights the importance of this technology and how it represents the next wave in the future of technology and communications, its different applications, and the fields of life it will dramatically affect, including business, education, telecommunication, and healthcare. The paper also discusses the future of holographic technology and how it will prevail in the coming years, reshaping many other fields of life, technologies, and businesses.
Holography is a diffraction-based coherent imaging technique in which a complex three-dimensional object can be reproduced from a flat, two-dimensional screen with a complex transparency representing amplitude and phase values. It is commonly agreed that real-time holography is the ne plus ultra art and science of visualizing fast temporally changing 3-D scenes. The integration of the real-time, or electro-holographic, principle into display technology is one of the most promising but also most challenging developments for the future consumer display and TV market. Only holography allows the reconstruction of natural-looking 3-D scenes, and it therefore provides observers with a completely comfortable viewing experience. To date, several challenges have prevented the technology from being commercialized, but those obstacles are now starting to be overcome. Recently, we have developed a novel approach to real-time display holography by combining an overlapping sub-hologram technique with tracked viewing-window technology.
Neural Scene Representation & Rendering: Introduction to Novel View Synthesis (Vincent Sitzmann)
An overview of the neural scene representation and rendering framework and an introduction to novel view synthesis approaches. Slides made for the Eurographics, CVPR, and SIGGRAPH courses on neural rendering, connected to the state-of-the-art report on Neural Rendering at Eurographics 2020.
Feel free to re-use the slides! I just ask that you keep some form of attribution, either at the beginning of your presentation, or in the slide footer.
This is about image segmentation. We will be using fuzzy logic and wavelet transforms for the segmentation. Fuzzy logic shall be used because of the inconsistencies that may occur during segmenting or
Improving image resolution through the CRA algorithm involved recycling proce... (csandit)
Image processing concepts are widely used in medical fields. Digital images are prone to a variety of types of noise. Noise is the result of errors in the image acquisition process that yield pixel values which do not reflect the true intensities of the real scene. Many researchers work on the analysis and processing of multi-dimensional images, and previous work has not been sufficient, so performance work continues. In this paper we contribute novel research on the analysis and performance improvement of image resolution. We propose the Concede Reconstruction Algorithm (CRA) with an involved recycling process to reduce the remaining problems in the enhancement part of image processing. The CRA algorithm has received a better response from researchers.
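As a generic illustration of the impulse noise that acquisition errors can introduce (not the CRA itself, which the abstract does not specify), a median filter removes isolated outlier pixels while leaving flat regions untouched:

```python
def median_filter_1d(signal, k=3):
    """k-point median filter (k odd), a standard remedy for impulse
    ('salt-and-pepper') noise; windows are clipped at the borders."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sorted(window)[len(window) // 2])
    return out
```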
AN EMERGING TREND OF FEATURE EXTRACTION METHOD IN VIDEO PROCESSING (cscpconf)
Recent progress in technology and flourishing applications open up new prospects and challenges for the image and video processing community. Compared to still images, video sequences afford more information about how objects and scenarios change over time. Video quality is very significant before applying any kind of processing technique. This paper deals with two major problems in video processing: noise reduction and object segmentation on video frames. Foreground-based segmentation and fuzzy c-means clustering segmentation are compared with the proposed method, an improvised color-based fuzzy c-means segmentation, applied to video frames to segment the various objects in the current frame. The proposed technique is a powerful method for image segmentation, and it works for both single- and multi-feature data with spatial information.
Experiments were conducted with various noises and filtering methods to show which is best suited, and the proposed segmentation approach generates good-quality segmented frames.
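For reference, standard fuzzy c-means, the baseline the proposed colour-based variant builds on, can be sketched on 1-D data (the paper's improvised variant is not public and is not reproduced here):

```python
def fcm(data, c, n_iter=50, m=2.0):
    """Fuzzy c-means on a list of floats (c >= 2, fuzzifier m > 1).
    Returns (centers, memberships)."""
    # initialise centers spread across the data range
    lo, hi = min(data), max(data)
    centers = [lo + (hi - lo) * k / (c - 1) for k in range(c)]
    u = [[0.0] * c for _ in data]
    for _ in range(n_iter):
        # membership update: inverse-distance weighting
        for i, x in enumerate(data):
            d = [max(abs(x - ck), 1e-12) for ck in centers]
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                    for j in range(c))
        # center update: membership^m weighted mean
        for k in range(c):
            den = sum(u[i][k] ** m for i in range(len(data)))
            centers[k] = sum((u[i][k] ** m) * x
                             for i, x in enumerate(data)) / den
    return centers, u
```

For image segmentation, each pixel's feature (intensity or colour) plays the role of a data point, and the membership matrix gives the soft segmentation.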
Efficient Method of Removing the Noise using High Dynamic Range Image (rahulmonikasharma)
Various tone mapping methods have been proposed to make images better match human visual observation. In general, tone mapping can operate on local and/or global features. In this work, a progressive tone-mapping framework combining a wavelet filter and soft thresholding is proposed to reduce noise, about 4% faster than the shrink process. It is a one-of-a-kind curve-centered global tone-mapping method that enhances the bright and dark regions. The Peak Signal-to-Noise Ratio (PSNR) value is calculated to quantify the enhancement. Simulation results show that the proposed scheme achieves high contrast improvement.
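The PSNR figure mentioned above is computed from the mean squared error against a reference image; a minimal sketch:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images,
    given as 2-D lists of intensities. Higher is better; identical
    images give infinity."""
    n = len(ref) * len(ref[0])
    mse = sum((r - t) ** 2
              for row_r, row_t in zip(ref, test)
              for r, t in zip(row_r, row_t)) / n
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)
```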
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
Focused Image Creation Algorithms for Digital Holography
1. Focused Image Creation Algorithms for Digital Holograms of Macroscopic Three-Dimensional Objects. Conor Mc Elhinney, Bryan M. Hennelly, Thomas J. Naughton. Tuesday 18th March. DH and Three-Dimensional Imaging -- 18th March 2008
20. Why digital holography? Using digital holography we can record a scene in a complex-valued data structure which retains some of the scene's 3D information. A standard camera records a 2D focused image of the scene from one perspective. Why do we need image processing? Reconstructing a digital hologram returns a 2D image of the scene at one specific depth (e.g. 300 mm from the camera) from an individual perspective (along the optical axis). Algorithms and processing techniques need to be developed to extract the 3D information from digital holograms by processing multiple reconstructions (volumes of reconstructions).
21. Why not 2D image processing? Standard 2D image processing techniques can be applied to individual digital holographic reconstructions with varying success. However, we are interested in developing the field of digital holographic image processing (DHIP), where we use volumes of reconstructions to extract 3D information from digital holograms. Using this information we can develop techniques which are more accurate than standard 2D approaches.
22. Reconstructing with digital holography: a digital hologram is converted into a digital reconstruction by applying the discrete Fresnel transform at a distance d.
23. Reconstructing with digital holography: applying the discrete Fresnel transform over a set of distances {d1, d2, d3, d4, d5, d6} yields a volume of reconstructions, one per depth.
24. Numerical focusing of digital holograms: holograms can be numerically reconstructed at an arbitrary depth away from the camera.
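The reconstruction step described above can be sketched with a single-FFT formulation of the discrete Fresnel transform. This is a minimal illustration, not the authors' implementation; the function name `fresnel_reconstruct` and its parameters (wavelength, pixel pitch) are assumptions for the sketch.

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, pixel_pitch, d):
    """Reconstruct a complex hologram at distance d via the discrete
    Fresnel transform (single-FFT formulation)."""
    ny, nx = hologram.shape
    k = 2 * np.pi / wavelength
    # pixel coordinates centred on the optical axis
    y, x = np.indices((ny, nx))
    x = (x - nx / 2) * pixel_pitch
    y = (y - ny / 2) * pixel_pitch
    # quadratic phase (chirp) applied in the hologram plane
    chirp = np.exp(1j * k / (2 * d) * (x**2 + y**2))
    field = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(hologram * chirp)))
    return field  # complex field; the displayed reconstruction is |field|**2
```

Varying `d` and keeping the intensity `|field|**2` at each depth produces the volume of reconstructions used in the rest of the talk.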
36. One function which has been shown to be both a sound focus measure and successfully applicable to reconstructions from digital holograms is variance, computed over an [n x n] intensity block I as V(I) = (1/n^2) * sum over (x,y) of (I(x,y) - mean(I))^2.
37. Focus detection: plot of variance versus image number for a sequence of reconstructions (sample images 2, 4, 6, 7, 10 shown); the peak of the variance curve indicates the in-focus reconstruction.
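The focus-detection idea above, selecting the reconstruction with the highest variance, can be sketched as follows. The helper names are hypothetical; only the use of variance as the focus measure comes from the slides.

```python
import numpy as np

def variance_focus(image):
    # variance of pixel intensities: high for sharp detail, low for blur
    return np.var(image)

def best_focus_index(stack):
    """Return the index of the sharpest reconstruction in a stack of
    2D intensity images taken at different depths."""
    scores = [variance_focus(img) for img in stack]
    return int(np.argmax(scores))
```

The depth associated with the returned index is the estimated in-focus depth for the scene (or region) covered by the stack.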
48. What is Depth-From-Focus? Depth-From-Focus is an image processing technique used to determine the depth of a scene, or of a region within a scene, by processing images taken at different focal depths. Why is this applicable to digital holography? Digital holograms can be numerically reconstructed at an arbitrary depth. These numerical reconstructions are each at a different focal plane, which makes them a good input to a Depth-From-Focus algorithm. What do we get from Depth-From-Focus? We can create depth maps of the scene, segment the scene, and create extended focused images of the scene.
51. How to compute a depth map: we first take a reconstruction and a block size of [n x n]. We calculate our focus measure on the block in the top left corner of the reconstruction, then raster scan the reconstruction, processing every block with the focus measure. We store the output value from each block in its corresponding position in a focus map; comparing focus maps across depths gives the depth map.
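The blockwise depth-from-focus procedure above can be sketched as follows. For brevity this sketch uses non-overlapping [n x n] blocks and variance as the focus measure; the function and variable names are assumptions, not the authors' code.

```python
import numpy as np

def depth_map(stack, depths, n):
    """stack: (D, H, W) array of intensity reconstructions, one per depth.
    depths: the D candidate depths. n: block size.
    Returns an (H//n, W//n) map of the depth maximising blockwise variance."""
    D, H, W = stack.shape
    dmap = np.zeros((H // n, W // n))
    best = np.full((H // n, W // n), -np.inf)
    for d_idx in range(D):
        for i in range(H // n):
            for j in range(W // n):
                block = stack[d_idx, i*n:(i+1)*n, j*n:(j+1)*n]
                v = np.var(block)  # focus measure for this block
                if v > best[i, j]:
                    best[i, j] = v
                    dmap[i, j] = depths[d_idx]
    return dmap
```

Each block is assigned the depth at which its focus measure peaks, mirroring the per-image focus detection but at block resolution.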
57. Block-size trade-off: smaller block sizes capture finer object features but give higher error in the estimate of the general shape. (Comparison panels: Object, 7x7, 43x43, 81x81, 121x121, 151x151.)
59. We intend to extend our algorithm to automatically determine what depth resolution to use in the experiment (the distance between successive reconstructions).
69. What is an extended focused image? A disadvantage of holographic reconstructions is the limited depth of field: for a reconstruction at depth d, only object points located at distance d from the camera are in focus. Why do we want to create an extended focused image? Reconstructions can therefore contain large blurry regions. Using our depth maps and the volume of reconstructions used to create them, we can create an extended focused image (volume of reconstructions + depth map = extended focused image).
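The pointwise approach to compositing can be sketched as: for each pixel, copy its value from the reconstruction at that pixel's estimated depth. The depth map is assumed here to already be at pixel resolution (e.g. a block depth map upsampled); the function name is hypothetical.

```python
import numpy as np

def pointwise_efi(stack, depth_indices):
    """stack: (D, H, W) intensity reconstructions.
    depth_indices: (H, W) integer index of the in-focus reconstruction
    for each pixel. Returns the (H, W) extended focused image."""
    H, W = depth_indices.shape
    rows, cols = np.indices((H, W))
    # fancy indexing: pick, per pixel, the value from its chosen depth
    return stack[depth_indices, rows, cols]
```

Because each output pixel comes from a single reconstruction, errors in the depth map appear directly in the composite, which motivates the neighbourhood approach on a later slide.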
89. Neighbourhood approach: our algorithm computes one depth value for every [n x n] pixel block in a reconstruction. We have developed a second extended focused image technique which can reduce the error in the EFI. In this technique, instead of taking one pixel out of the reconstruction at the estimated depth, we take the [n x n] pixel block that was used to calculate the depth value. In this way we average pixel intensities based on the depth value, with the aim of smoothing error regions.
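The neighbourhood approach can be sketched as follows, assuming one depth estimate per pixel and overlapping [n x n] blocks centred on each pixel, so that overlapping contributions are averaged. This interpretation, and all names here, are assumptions for the sketch rather than the authors' implementation.

```python
import numpy as np

def neighbourhood_efi(stack, depth_indices, n):
    """stack: (D, H, W) reconstructions; depth_indices: (H, W) per-pixel
    depth index; n: block size. Each pixel contributes the n x n block
    around it from its chosen reconstruction; overlaps are averaged."""
    D, H, W = stack.shape
    acc = np.zeros((H, W))  # summed block contributions
    cnt = np.zeros((H, W))  # number of blocks covering each pixel
    half = n // 2
    for i in range(H):
        for j in range(W):
            d = depth_indices[i, j]
            i0, i1 = max(0, i - half), min(H, i + half + 1)
            j0, j1 = max(0, j - half), min(W, j + half + 1)
            acc[i0:i1, j0:j1] += stack[d, i0:i1, j0:j1]
            cnt[i0:i1, j0:j1] += 1
    return acc / cnt
```

Averaging many overlapping blocks smooths isolated depth-map errors at the cost of some sharpness, which matches the smoother results reported for this method.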
110. Conclusion: we have demonstrated and discussed the process for creating a depth map from a set of reconstructions of a digital hologram. We have also demonstrated the first EFIs for digital holograms containing macroscopic objects, and discussed the selection of block size and step size in our depth-from-focus algorithm. Our implementation is currently limited by the lengthy computation time our algorithm requires on serial machines; we are in the process of addressing this and expect to achieve reasonable computation times on a single machine.
111. Questions. (Result panels: front focal plane, back focal plane, EFI pointwise approach, EFI neighbourhood approach.)