The main objective of this thesis is to provide tools for expressive, real-time synthesis of the sounds produced by physical interactions between objects in a 3D virtual environment. Such sounds, for example collision sounds or sounds from continuous interaction between surfaces, are difficult to create in a pre-production process, since they are highly dynamic and vary drastically with the interaction and the objects involved. To achieve this goal, two approaches are proposed: the first is based on the simulation of the physical phenomena responsible for sound production; the second on the processing of a database of recordings.
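As an illustration of the first approach, impact sounds are often synthesized by modal synthesis, i.e. as a sum of exponentially damped sinusoids excited by the collision. The sketch below is a generic, hypothetical example (the mode frequencies, dampings, and amplitudes are made up for illustration), not the thesis's actual model:

```python
import numpy as np

def impact_sound(modes, sr=44100, duration=0.5):
    """Synthesize an impact sound as a sum of damped sinusoids.

    modes: list of (frequency_hz, damping_per_s, amplitude) triples,
    one per resonant mode of the struck object.
    """
    t = np.arange(int(sr * duration)) / sr
    y = np.zeros_like(t)
    for f, d, a in modes:
        y += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    # Normalize to [-1, 1] to avoid clipping on playback.
    peak = np.max(np.abs(y))
    return y / peak if peak > 0 else y

# A toy "metal bar": a few inharmonic, lightly damped modes.
bar = impact_sound([(440.0, 8.0, 1.0), (1187.0, 12.0, 0.6), (2330.0, 20.0, 0.3)])
```

At runtime, a physics engine would drive the excitation (which modes, how hard) from the collision event, which is what makes the result vary with the interaction.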
Adria Recasens, DeepMind – Multi-modal self-supervised learning from videos (Codiax)
The document summarizes a talk on multi-modal self-supervised learning from videos. It discusses using multiple modalities like vision, audio and language from videos for self-supervised learning. It presents two models: 1) A Multi-Modal Versatile network that can take any modality as input and respects the specificity of each while enabling comparison. 2) BraVe which learns representations by regressing a broad representation of the whole video from a narrow view to leverage different augmentations and modalities. Both models achieve state-of-the-art results on downstream tasks, showing videos provide rich self-supervision and using additional context improves representation learning.
Multimodal Analysis for Bridging Semantic Gap with Biologically Inspired Algo... (techkrish)
The amount and complexity of digital media being generated, stored, transmitted, analysed and accessed has increased exponentially as a result of advances in computer and Web technologies. Much of this information combines digital images, video, audio, graphics and textual data. Large-scale online video repositories enable users to creatively share material with a wide audience. Consequently, there is increasing interest in associating media items with free-text annotations, ranging from simple titles to detailed descriptions of the video content. In an effort to reduce the complexity of the annotation task, this talk outlines some of the techniques developed for indexing large-scale multimedia repositories by exploiting the multi-modality of the information space. One such approach combines semantic expansion and visual analysis to predict user tags for online videos. The framework exploits visual features using biologically inspired algorithms together with associated textual metadata, which is semantically expanded using complementary textual resources. The experimental results indicate the usefulness of the proposed approach for analysing large-scale media items.
The document discusses using binary partition trees (BPT) as a structured region-based representation for hyperspectral imagery. It introduces hyperspectral imagery and BPTs. It then discusses constructing a BPT for hyperspectral images by merging regions based on a criterion, and using pruning strategies on the BPT for tasks like object detection. The aim is to leverage BPTs for hyperspectral image analysis through construction of the BPT and subsequent pruning.
This document discusses various aspects of text and multimedia, including:
- Text attributes that can be changed like font, style, size, color, and effects to emphasize text.
- Common font types like serif, sans-serif, and script and their uses.
- Text formatting considerations like leading, kerning, and readability.
- Using text and its design to set mood and complement graphics in a multimedia project.
This document provides an overview of recent developments in sound recognition techniques. It discusses several methods for sound recognition, including matching pursuit algorithms with MFCC features, probabilistic distance support vector machines using generalized gamma modeling of STE features, and frequency vector principal component analysis. The document also reviews related literature on environmental sound recognition using time-frequency audio features and sound event recognition. It aims to present an updated survey on sound recognition methods and discuss future research trends in the field.
Removal of noise is a crucial step in the image reconstruction process, but image denoising remains a challenging problem in current image processing research. Denoising aims to remove the noise from a corrupted image while preserving its edges and other detailed characteristics as far as possible. This noise is introduced during acquisition, transmission and reception, and storage and retrieval. In this paper, denoised images are obtained with a modified denoising technique, which operates in both the wavelet and the spatial domain, and with a locally adaptive wavelet denoising technique, which operates in the wavelet domain only. The achievements of the two techniques are evaluated and analyzed, and the procedures are compared on the basis of the PSNR between the input image and the noisy image and the SNR between the input image and the denoised image. Simulation and experimental results show that the locally adaptive wavelet denoising procedure yields a lower mean square error and a more effective signal-to-noise ratio than the modified denoising procedure, so the denoised image has a superior visual effect. Both techniques are implemented in MATLAB.
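The figures of merit used in the comparison can be computed as follows. This is the standard definition for 8-bit images, not code from the paper:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images (any float-convertible arrays)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.mean((a - b) ** 2)

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB, with peak=255 for 8-bit images."""
    m = mse(reference, test)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)
```

A denoiser that lowers the MSE against the clean reference raises the PSNR, which is why the two metrics move together in such comparisons.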
Lossy Compression Using Stationary Wavelet Transform and Vector Quantization (Omar Ghazi)
This document is a thesis submitted by Omar Ghazi Abbood Khukre to the Department of Information Technology at Alexandria University in partial fulfillment of the requirements for a Master's degree in Information Technology. The thesis proposes a lossy image compression approach using Stationary Wavelet Transform and Vector Quantization. It includes acknowledgments, an abstract, table of contents, list of figures/tables, and chapters on introduction, background/literature review, the proposed lossy compression method, experiments and results analysis, and conclusion.
Audio Steganography Coding Using the Discreet Wavelet Transforms (CSCJournals)
The performance of an audio steganography compression system using the discrete wavelet transform (DWT) is investigated. Audio steganography coding is the technology of transforming stego-speech into an efficiently encoded version that can be decoded at the receiver side to produce a close representation of the initial (uncompressed) signal. Experimental results demonstrate the efficiency of the compression technique: the compressed stego-speech signals are perceptually intelligible and indistinguishable from the corresponding initial signals, while the initial stego-speech can be recovered with only slight degradation in quality.
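The DWT underlying such a coder can be illustrated with the simplest wavelet, the Haar transform. One analysis/synthesis level with perfect reconstruction (a generic sketch, not the paper's coder):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients.
    Expects an even-length signal."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar DWT (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x
```

Compression comes from keeping the approximation band (and quantizing or discarding small detail coefficients), which halves the data per level while preserving most of the speech energy.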
Spatio-temporal control of light in complex media (Sébastien Popoff)
The document discusses measuring and utilizing transmission matrices to control light propagation through complex scattering media. It describes how the transmission matrix can be measured and used to focus light through scattering samples or transfer image information. Applications include focusing light to target areas and reconstructing images despite multiple scattering within biological tissues or other disordered materials.
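The focusing application can be sketched numerically: model the medium by a random complex transmission matrix and send in the phase-conjugate of the target row, so that all scattering paths interfere constructively at the chosen output mode. This is an idealized, noise-free simulation, not an experimental procedure from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 64, 32
# A complex Gaussian matrix is the standard statistical model for the
# transmission matrix of a strongly scattering medium.
T = (rng.normal(size=(n_out, n_in))
     + 1j * rng.normal(size=(n_out, n_in))) / np.sqrt(2 * n_in)

target = 7  # output mode (speckle grain) where we want to focus
# Phase conjugation: shape the input so all paths to `target` add in phase.
e_in = np.conj(T[target]) / np.linalg.norm(T[target])
out = T @ e_in
intensity = np.abs(out) ** 2  # the focus dominates the background speckle
```

The intensity at the target mode exceeds the diffuse background by a factor of order the number of controlled input modes, which is the enhancement reported in transmission-matrix experiments.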
A Brief Introduction of Anomalous Sound Detection: Recent Studies and Future... (Yuma Koizumi)
Presentation slide for AI seminar at Artificial Intelligence Research Center, The National Institute of Advanced Industrial Science and Technology, Japan.
URL (in Japanese): https://www.airc.aist.go.jp/seminar_detail/seminar_046.html
Analysis of PEAQ Model using Wavelet Decomposition Techniques (idescitation)
Digital broadcasting, internet audio, and music databases make use of audio compression and coding techniques to reduce high-quality audio signals without impairing their perceptual quality. Audio signal compression is a lossy compression technique: it converts the original audio signal into a compressed bitstream. The compressed audio bitstream is decoded at the decoder to produce a close approximation of the original signal. To improve the coding, this work attempts to verify the perceptual evaluation of audio quality (PEAQ) model in BS.1387 using wavelet decomposition techniques. Finally, the masking thresholds for sub-bands computed with wavelet techniques and with the fast Fourier transform (FFT) are compared.
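A crude stand-in for the per-sub-band analysis such a model performs is to split the FFT spectrum into bands and measure the energy in each. The uniform band layout here is a simplification; BS.1387 uses perceptually motivated bands:

```python
import numpy as np

def subband_energies_fft(x, n_bands=8):
    """Split the FFT power spectrum into equal-width sub-bands and
    return the energy in each band."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.sum() for b in bands])

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone, one second
energies = subband_energies_fft(tone)
```

A masking model would then convert these per-band energies into per-band thresholds below which quantization noise is inaudible; comparing wavelet-based and FFT-based versions of this step is the paper's stated goal.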
- Compressive sensing (CS) theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.
- CS relies on two principles:
  - sparsity, which pertains to the signal of interest;
  - incoherence, which pertains to the sensing modality.
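A minimal demonstration of these two principles: a 3-sparse signal measured with an incoherent random Gaussian matrix can be recovered by orthogonal matching pursuit, one of several standard CS recovery algorithms. The problem sizes below are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with y = A x."""
    residual, support = y.astype(float), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 40, 100, 3                        # 40 measurements of a 3-sparse length-100 signal
A = rng.normal(size=(n, m)) / np.sqrt(n)    # incoherent Gaussian sensing matrix
x_true = np.zeros(m)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]      # the sparse signal of interest
x_hat = omp(A, A @ x_true, k)
```

With 40 measurements of a 100-sample signal, exact recovery of the 3 nonzeros shows the sub-Nyquist sampling that CS promises.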
Performance Analysis of Digital Watermarking Of Video in the Spatial Domain (paperpublications3)
Abstract: In this paper, we suggest spatial domain methods for digital video watermarking with both visible and invisible watermarks. The methods are used for copyright protection as well as proof of ownership. We first extract the frames from the video and then use the spatial domain characteristics of the frames, working directly on the pixel values of each frame according to the watermark, and calculate different parameters.
Keywords: Digital video watermarking, copyright protection, spatial domain watermarking, least significant bit substitution.
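The least-significant-bit substitution mentioned in the keywords can be sketched on a single frame as follows. This is a generic illustration of the technique, not the paper's exact scheme:

```python
import numpy as np

def embed_lsb(frame, watermark_bits):
    """Embed bits into the least significant bit of a frame's first pixels.
    frame: uint8 array; watermark_bits: uint8 array of 0/1 values."""
    flat = frame.flatten()  # flatten() copies, so the input frame is untouched
    n = len(watermark_bits)
    flat[:n] = (flat[:n] & 0xFE) | watermark_bits  # clear LSB, then set it
    return flat.reshape(frame.shape)

def extract_lsb(frame, n_bits):
    """Read the watermark back from the least significant bits."""
    return frame.flatten()[:n_bits] & 1
```

Because only the lowest bit of each carrier pixel changes, the pixel values move by at most 1, which keeps the invisible watermark imperceptible.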
Brunelli 2008: template matching techniques in computer vision (zukun)
The document discusses template matching techniques in computer vision. It begins with an overview that defines template matching and discusses some common computer vision tasks it can be used for, like object detection. It then covers topics like detection as hypothesis testing, training and testing techniques, and provides a bibliography.
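A baseline matcher of the kind such surveys cover is exhaustive normalized cross-correlation (NCC). A straightforward, unoptimized sketch:

```python
import numpy as np

def match_template(image, template):
    """Exhaustive normalized cross-correlation template matching.

    Returns the (row, col) of the top-left corner of the window with the
    highest NCC score against the template.
    """
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.linalg.norm(w) * tn
            # NCC is in [-1, 1]; flat (zero-variance) windows score 0.
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Mean subtraction and normalization make the score invariant to affine brightness changes, which is why NCC is the standard robust variant of plain correlation for object detection.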
On the scattering of light: various models and methods used in computer grap... (Toru Tamaki)
The document discusses reflection, transmission, and scattering of light. It begins by explaining diffuse reflection, specular reflection, and the bidirectional reflectance distribution function (BRDF), which models both. It then discusses diffuse transmission, specular transmission, and the bidirectional transmission distribution function (BTDF). Next it introduces the bidirectional scattering surface reflectance distribution function (BSSRDF) and different models for it, including dipole/multipole models and plane-parallel approximations, and notes issues with the BSSRDF. The document concludes by discussing scattering models for participating media, covering absorption, emission, in-scattering and out-scattering, and rendering formulations such as the airlight approximation, the Born series, and the Neumann series to model scattering.
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Monitoring and Managing Anomaly Detection on OpenShift.pdf (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
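As a flavor of the kind of model such a pipeline might deploy on an edge device, here is a minimal rolling z-score anomaly detector. It is a generic sketch, not taken from the tutorial's notebooks:

```python
import numpy as np

def zscore_anomalies(series, window=20, threshold=3.0):
    """Flag points that deviate more than `threshold` standard deviations
    from the mean of the preceding `window` samples."""
    series = np.asarray(series, float)
    flags = np.zeros(len(series), dtype=bool)
    for i in range(window, len(series)):
        mu = series[i - window:i].mean()
        sigma = series[i - window:i].std()
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            flags[i] = True
    return flags
```

In the architecture above, such a detector would consume sensor readings from a Kafka topic, and its per-window statistics would be exported as Prometheus metrics for monitoring.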
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
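Under the hood, vector search ranks documents by similarity between embedding vectors. Below is a brute-force version of the operation a vector index accelerates; this is illustrative only, not MongoDB Atlas API code:

```python
import numpy as np

def cosine_search(query, vectors, top_k=3):
    """Brute-force nearest-neighbor search by cosine similarity.

    Returns (index, score) pairs for the top_k most similar vectors.
    """
    vectors = np.asarray(vectors, float)
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q                        # cosine similarity per stored vector
    order = np.argsort(-scores)[:top_k]   # highest similarity first
    return list(zip(order.tolist(), scores[order].tolist()))
```

A production vector database replaces this linear scan with an approximate index so that the same ranking is served in sub-linear time over millions of vectors.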
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence (IndexBug)
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Skybuffer SAM4U tool for SAP license adoption (Tatiana Kojar)
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf (Chart Kalyan)
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover Test Automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers (akankshawande)
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Building Production Ready Search Pipelines with Spark and Milvus (Zilliz)
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we show how to use Spark to process unstructured data, extract vector representations, and push the vectors to the Milvus vector database for search serving.
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Ph.D. Defense: Expressive Sound Synthesis for Animation
1. Expressive Sound Synthesis
For Animation
Cécile Picard-Limpens
University of Nice/Sophia-Antipolis
École Doctorale STIC
REVES INRIA Sophia-Antipolis, France
Advisors: George Drettakis, INRIA Sophia Antipolis (Reves)
François Faure, INRIA Rhône-Alpes (Evasion)
Nicolas Tsingos, DOLBY Laboratories, CA, USA
Defense for Ph.D. in Computer Science
C. Picard-Limpens December 4, 2009 Expressive Sound Synthesis For Animation
2. Outline
1 Sound and Virtuality
2 Physics-Based Sound Synthesis
    Contact Modeling
    Resonator Modeling
3 Example-Based Synthesis
    Flexible Sound Synthesis
4 Perspectives on a Hybrid Model
    Motivation and Application
5 Conclusion and Discussion
    Contributions
    Extensions and Applications
3. Sound Rendering for Virtual Reality and Games
Interactive Audio Rendering
(R. Vantielcke - WipeoutHD on Playstation 3)
4. Sound Rendering for Virtual Reality and Games
Interactive Audio Rendering
(R. Vantielcke - WipeoutHD on Playstation 3)
Traditional Approach
Pre-Recordings Triggered
+ : Easy to implement
– : Repetitive audio, discrepancies, lack of flexibility
5. From Playback of Samples to Synthesis
Digital Sound Synthesis
Source modeling ←
Sound propagation, Sound reception
Techniques
Rigid body simulation
Finite Element Method (FEM)
(ArtiSynth)
6. From Playback of Samples to Synthesis
Digital Sound Synthesis
Source modeling ←
Sound propagation, Sound reception
Techniques
Rigid body simulation
Finite Element Method (FEM)
(ArtiSynth)
Physical Sound Simulation
+ : Physical approach, easy parametrization, low memory usage
– : Preprocess computation, interface between physics and sound system
7. Controlling the Sound Simulation
Challenges
Sound Coherent With Visuals
Unpredictable character of sounds
Real-time sound synthesis
Parametrization and Expressiveness
Control and interactivity
Authoring
8. Our Contribution
Three Research Axes
Physics-Based Sound Synthesis
    Contact modeling
    Resonator modeling
9. Our Contribution
Three Research Axes
Physics-Based Sound Synthesis
    Contact modeling
    Resonator modeling
Example-Based Sound Synthesis
    Automatic analysis of pre-recordings
    Flexible synthesis for physics-driven animation
10. Our Contribution
Three Research Axes
Physics-Based Sound Synthesis
    Contact modeling
    Resonator modeling
Example-Based Sound Synthesis
    Automatic analysis of pre-recordings
    Flexible synthesis for physics-driven animation
Perspectives on a Hybrid Model
11. Overview
1 Sound and Virtuality
2 Physics-Based Sound Synthesis
    Contact Modeling
    Resonator Modeling
3 Example-Based Synthesis
    Flexible Sound Synthesis
4 Perspectives on a Hybrid Model
    Motivation and Application
5 Conclusion and Discussion
    Contributions
    Extensions and Applications
12. Sound from Contacts
Dichotomy
    Impacts
    Continuous contacts
Two Schemes for Contact Force Modelling
    Feed-forward scheme [van den Doel et al. 01]: Additive synthesis
    Direct computation of contact forces [Avanzini et al. 02]: Bristle model
13. Contact Modeling
What Are The Current Limitations
for Continuous Contacts?
Rate of physics engine reports
No geometric details when using visual textures
Authoring and control are challenging
14. Contact Modeling
What Are The Current Limitations
for Continuous Contacts?
Rate of physics engine reports
No geometric details when using visual textures
Authoring and control are challenging
HOW Can We Solve Them?
By extracting
Excitation profiles from visual textures
with
Adaptive resolution
[Picard et al., VRIPHYS 08]
15. Method for Impact Sounds
16. Method for Continuous Contact Sounds
Extraction of Excitation Profiles
17. Synthesis of Excitation Profiles
For the Audio Force Modelling
Technique
    Extraction from the visual texture image
    Re-sampling along the trajectory of the contact interaction (60 Hz vs. 44 kHz)
Based on the Complexity of the Histogram
    Simple texture image: Gradient of the image intensity
    Complex texture image: Isocurves of constant brightness (isophotes)
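The extraction and re-sampling steps above can be sketched in a few lines. This is a minimal illustration under toy assumptions, not the thesis implementation: the texture and trajectory are synthetic, only the "simple texture" gradient rule is shown, and the name `excitation_profile` is hypothetical.

```python
import numpy as np

def excitation_profile(texture, path_60hz, audio_rate=44100, physics_rate=60):
    """Sample a grayscale texture along a contact trajectory and
    upsample the result from the physics rate to the audio rate."""
    # Nearest-pixel lookup of image intensity along the contact path.
    ys = np.clip(path_60hz[:, 1].astype(int), 0, texture.shape[0] - 1)
    xs = np.clip(path_60hz[:, 0].astype(int), 0, texture.shape[1] - 1)
    intensity = texture[ys, xs].astype(float)
    # "Simple texture" case from the slide: the intensity gradient
    # along the path serves as the excitation (audio force) signal.
    profile = np.gradient(intensity)
    # Linear interpolation from 60 Hz trajectory samples to 44.1 kHz audio.
    t_phys = np.arange(len(profile)) / physics_rate
    t_audio = np.arange(0, t_phys[-1], 1.0 / audio_rate)
    return np.interp(t_audio, t_phys, profile)

# Toy example: a small noise texture and a straight horizontal path.
rng = np.random.default_rng(0)
tex = rng.random((64, 64))
path = np.stack([np.linspace(0, 63, 120), np.full(120, 32.0)], axis=1)
exc = excitation_profile(tex, path)
```

A real pipeline would replace the nearest-pixel lookup with the adaptive-resolution extraction described on this slide.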
18. Complex Textures
Coding the Excitation Profiles
Isophotes = Large amount of data
How Can We Lighten the Info?
By Coding the Excitation Profiles
= Main Features + Noise Part
Noise Part: Statistical approximation
19. Real-Time Audio Management
A Flexible Audio Pipeline
Simulations Driven by Ageia’s PhysX (now NVIDIA)
20. Audio Texture Synthesis
A Solution for Interactive Simulations
A Sound in Coherence with Visuals
Flexible Resolution
Adapted to Procedural Generation
21. Overview
1 Sound and Virtuality
2 Physics-Based Sound Synthesis
    Contact Modeling
    Resonator Modeling
3 Example-Based Synthesis
    Flexible Sound Synthesis
4 Perspectives on a Hybrid Model
    Motivation and Application
5 Conclusion and Discussion
    Contributions
    Extensions and Applications
22. Vibration Models
Modal Analysis
Generating Sounds Based on Physics Simulation
    In computer music [Iovino et al. 97, Cook 02]
    In computer graphics [Van Den Doel 01, O'Brien et al. 02]
Improvements for Interactive Sound Rendering
    Modal parameter tracking [Maxwell et al. 07]
    Frequency content sparsity [Bonneel et al. 08]
23. Vibration Models
Modal Analysis
1 Get a Sounding Object and its Geometry
2 Construct the FEM (ex: Tetrahedral Mesh)
3 Apply Newton's Second Law to the DOFs
    $M \ddot{d} + C \dot{d} + K d = f$ (1)
4 Eigendecomposition ⇒ Modal Parameters
    $M = L L^{T};\quad L^{-1} K L^{-T} = V \Lambda V^{T}$ (2)
    where V = matrix of eigenvectors
    Λ = diagonal matrix of eigenvalues
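Steps 3 and 4 can be sketched with NumPy on a toy system. The fixed-fixed spring-mass chain below merely stands in for the assembled FEM mass and stiffness matrices; all material values are illustrative, not from the thesis.

```python
import numpy as np

# Toy stand-in for the assembled FEM matrices: a chain of n equal
# masses coupled by springs (illustrative values only).
n, m, k = 8, 1e-3, 1e5
M = m * np.eye(n)
K = k * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

# Cholesky split M = L L^T, then diagonalize L^{-1} K L^{-T} as in
# equation (2): its eigenvalues are the squared modal frequencies.
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
eigvals, V = np.linalg.eigh(Linv @ K @ Linv.T)

omegas = np.sqrt(eigvals)            # modal angular frequencies (rad/s)
freqs_hz = omegas / (2 * np.pi)      # mode frequencies in Hz
```

With real FEM matrices the only change is how M and K are assembled; the decomposition itself is identical.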
24. Vibration Models
Modal Analysis
In Real-time:
Modal synthesis
    $s(t) = \sum_{i=1}^{n} a_i \sin(\omega_i t)\, e^{-d_i t}$ (3)
Control for vibration models
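Equation (3) translates directly into a short synthesis routine. A minimal sketch with made-up modal parameters (the thesis obtains a_i, ω_i, d_i from the eigendecomposition of the previous slide):

```python
import numpy as np

def modal_synthesis(amps, omegas, dampings, duration=1.0, sr=44100):
    """Sum of damped sinusoids: s(t) = sum_i a_i sin(w_i t) exp(-d_i t)."""
    t = np.arange(int(duration * sr)) / sr
    s = np.zeros_like(t)
    for a, w, d in zip(amps, omegas, dampings):
        s += a * np.sin(w * t) * np.exp(-d * t)
    return s

# Illustrative parameters (not from the thesis): three decaying modes.
s = modal_synthesis(amps=[1.0, 0.5, 0.25],
                    omegas=2 * np.pi * np.array([440.0, 880.0, 1320.0]),
                    dampings=[6.0, 9.0, 12.0])
```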
25. Vibration Models
Modal Analysis
What Are The Current Limitations?
Meshing is difficult
No real control on the FEM resolution
No clear interface between physics and audio
26. Vibration Models
Modal Analysis
What Are The Current Limitations?
Meshing is difficult
No real control on the FEM resolution
No clear interface between physics and audio
HOW Can We Solve Them?
By proposing
A robust and multi-scale modal analysis
which is
Coherent with the physics simulation
[Picard et al., DAFx 09]
27. Our Deformation Model
Inspired by the Work of Nesme et al. [Nesme et al. 06]
Technique
    Merged voxels used as Hexahedral Finite Elements
Implementation with the Sofa Framework
Validation of the Model
    Tests on a metal cube
28. Robustness
Robust Even for Non-Manifold Geometries
Material: Aluminium
29. Multi-Scale for Efficient Memory Usage
A Squirrel in Pine Wood
30. Multi-Scale for Efficient Memory Usage
A Squirrel in Pine Wood: Different FE resolutions
    3x3x3, 4x4x4, 8x8x8, 9x9x9
Frequency Content = f(Hexahedral FE Resolution)
Higher resolution models:
    Frequency centroid shift
    Convergence of the frequency content
31. Comparison with Classical Approach
Sounding Bowl - Material: Aluminium
Classical Approach (816 modes) vs. Our Approach (75 modes)
32. A Robust and Multi-Scale Modal Analysis
A Solution for Sound Synthesis
Realistic
Adapted to Non-Manifold Geometries
Resource Flexibility
33. Overview
1 Sound and Virtuality
2 Physics-Based Sound Synthesis
    Contact Modeling
    Resonator Modeling
3 Example-Based Synthesis
    Flexible Sound Synthesis
4 Perspectives on a Hybrid Model
    Motivation and Application
5 Conclusion and Discussion
    Contributions
    Extensions and Applications
34. Implementation of Signal-Based Models
Concatenative Synthesis
    [Roads 91, Schwarz 06]
Sound Textures Based on Physics
    [Cook 99] [Dobashi et al. 03, Zheng et al. 09]
Authoring and Interactive Control
    [Cook 02]
35. Implementation of Signal-Based Models
What Are The Current Limitations?
Processing is not generic
Parametrizing is difficult
36. Implementation of Signal-Based Models
What Are The Current Limitations?
Processing is not generic
Parametrizing is difficult
HOW Can We Solve Them?
By
Retargetting example sounds
To physics-driven animation
[Picard et al., AES 09]
37. Our Approach
[Pipeline diagram]
Preprocessing: the audio recording is analyzed into sinusoidal and transient parts, yielding (1) a dictionary of audio grains (impulsive/continuous) and (2) correlation patterns; from the object geometry and virtual environment, collision structures are built and procedures defined.
Interactive: the rigid-body simulation drives the retargetting to animation; audio renderer and video renderer together produce the animation with audio.
Our Contributions: (1) the dictionary of audio grains and (2) the correlation patterns.
38. Preprocess: A Generic Analysis
Impulsive and Continuous Contacts
    Spectral Modeling Synthesis (SMS) [Serra 97]
Automatic Extraction of Audio Grains
    Dictionary: Impulsive/Continuous
Generation of Correlation Patterns
    between original recordings and audio grains
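The idea of a correlation pattern between a recording and a grain can be illustrated with a normalized cross-correlation. This is a toy sketch, not the thesis pipeline: the function name and data are invented, and the grain is an exact slice of the recording so the correlation peaks at its true offset.

```python
import numpy as np

def correlation_pattern(recording, grain):
    """Cross-correlate a (normalized) grain against a recording; the
    peak position/amplitude tell where the grain best fits."""
    g = grain / (np.linalg.norm(grain) + 1e-12)
    # np.correlate slides g over the recording without reversing it.
    return np.correlate(recording, g, mode='valid')

# Toy example: the grain is cut out of the recording at offset 500.
rng = np.random.default_rng(1)
rec = rng.standard_normal(2000)
grain = rec[500:600]
corr = correlation_pattern(rec, grain)
best = int(np.argmax(corr))   # recovers the grain's placement
```

In the actual system such patterns are precomputed for every recording/grain pair and reused at run time to pick candidate grains by maximum correlation amplitude.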
39. On-Line: Flexible Sound Synthesis
Resynthesis of the Original Recordings
    Candidate grains: max. correlation amplitude
40. On-Line: Flexible Sound Synthesis
Resynthesis of the Original Recordings
    Candidate grains: max. correlation amplitude
Interactive Physics-Driven Animations
Physics Info for Retargetting
    Contact type: impulsive or continuous?
    Penetration force and relative velocity
41. t
On-Line: Flexible Sound Synthesis
Sound and
Virtuality
Physics-Based
Synthesis
Example-Based
Synthesis Resynthesis of the Original Recordings
Flexible Sound
Synthesis Candidate grains: max. correlation amplitude
Retargetting Example
Sounds
Perspectives on Interactive Physics-Driven Animations
a Hybrid Model
Physics Info for Retargetting
Conclusion and
Discussion Contact type: impulsive or continuous?
Penetration force and relative velocity
Flexible Audio Shading Approach
Additional, User-defined Resynthesis Schemes
Spectral domain adaptation/modification
C. Picard-Limpens December 4, 2009 Expressive Sound Synthesis For Animation
32
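The selection rule on this slide (candidate grains chosen at the maximum of the correlation amplitude, then overlap-added) might look like the following sketch. The data layout (`dictionary`, `patterns`) and the single-peak selection are simplifying assumptions, not the actual system.

```python
import numpy as np

def resynthesize(length, dictionary, patterns):
    """Rebuild a signal by placing each grain at the peak of its
    correlation pattern, scaled by the peak amplitude (overlap-add).

    dictionary : list of 1-D grain arrays
    patterns   : list of correlation arrays, one per grain
    """
    out = np.zeros(length)
    for grain, patt in zip(dictionary, patterns):
        pos = int(np.argmax(np.abs(patt)))       # candidate position
        amp = patt[pos]                          # candidate amplitude
        end = min(pos + len(grain), length)
        out[pos:end] += amp * grain[:end - pos]  # overlap-add
    return out
```

Because placement and gain come from the correlation pattern rather than from fixed timestamps, the same dictionary can be retargetted to new events by substituting a different pattern.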
42. Resynthesis of the Original Recordings

- 94 recordings (14.6 MB)
- ≈ 5000 grains + 94 correlation patterns (20% gain)
- Examples: breaking glass, shooting gun, rolling

Additional material: http://www-sop.inria.fr/members/Cecile.Picard/ ("Supplemental AES")
43. Flexible Audio Shading Approach

Easy implementation of time-scaling
- Faster rolling
- Slower breaking

Synthesis of an infinity of similar audio events by varying the audio content
- Rhythmic pattern from "Breaking Stone", new material content: stone and gun
- Rhythmic pattern from "Breaking Glass", new material content: ceramic
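The time-scaling idea above (e.g. "faster rolling") can be illustrated by rescaling grain onset times while leaving each grain untouched, so pitch and timbre are preserved. The data layout is an assumption for the sketch, not the thesis code.

```python
import numpy as np

def time_scale(grains, onsets, factor, length):
    """Speed up (factor < 1) or slow down (factor > 1) an event by
    rescaling grain onset times; each grain itself is unchanged."""
    out = np.zeros(int(length * factor))
    for grain, t in zip(grains, onsets):
        pos = int(t * factor)                 # rescaled onset
        end = min(pos + len(grain), len(out))
        if end > pos:
            out[pos:end] += grain[:end - pos]  # overlap-add unchanged grain
    return out
```

Swapping the `grains` list for grains from another recording while keeping `onsets` gives the "new material content on an existing rhythmic pattern" effect described on this slide.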
44. Interactive Physics-Driven Animations

- Simulations driven by the SOFA framework
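Retargetting relies on the physics engine reporting, per contact, its type (impulsive or continuous) plus the penetration force and relative velocity. One hypothetical classification rule, purely illustrative (the thresholds and gain law are invented for this sketch):

```python
def classify_contact(normal_force, relative_velocity, dt, impulse_time=0.02):
    """Crude heuristic: a contact shorter than `impulse_time` seconds is
    impulsive (impact); otherwise it is continuous (rolling/sliding).
    Penetration force and tangential speed then control the gain."""
    kind = "impulsive" if dt < impulse_time else "continuous"
    gain = abs(normal_force) * (1.0 + abs(relative_velocity))
    return kind, gain
```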
45. Retargetting Example Sounds

A solution for interactive simulations:
- Variety
- Adapted to scenarios
- Small memory footprint
- Real-time rendering

An attractive solution for industrial applications (Eden Games, an ATARI game studio)
46. Overview

1 Sound and Virtuality
2 Physics-Based Sound Synthesis
    Contact Modeling
    Resonator Modeling
3 Example-Based Synthesis
    Flexible Sound Synthesis
4 Perspectives on a Hybrid Model
    Motivation and Application
5 Conclusion and Discussion
    Contributions
    Extensions and Applications
48. Sound Modeling: When Nonlinearity Occurs

Problems of single models
- Vibration models assume linearity
- Example-based sounds are hard to parametrize

Previous work: modeling nonlinearities
- [O'Brien et al. 01, Chadwick et al. 09]
- [Cook 02]
50. Fracture Events

Background
- Frequently occur in virtual environments
- Visual rendering: [O'Brien et al. 99, 02], [Parker and O'Brien 09]
- Sound rendering: little research [Warren et al. 84], [Rath et al. 03]

Challenges
- The event depends on the material involved
- Different phases emerge from a fracture event
51. Parametrization of Our Hybrid Model

Selection criteria
- The hybrid model is applied when nonlinearity occurs

Techniques
- FM synthesis
- Audio grains

Parametrization
- Smooth transition with the vibration model
- Coherence inside the hybrid model
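FM synthesis, one of the two techniques listed for the hybrid model, follows Chowning's classic formula y(t) = A sin(2π f_c t + I sin(2π f_m t)). A minimal sketch; the parameter values in the comments are illustrative, not taken from the thesis:

```python
import numpy as np

def fm_tone(fc, fm, index, duration, sr=44100, amp=1.0):
    """Frequency modulation: a carrier at fc Hz whose phase is modulated
    by a sinusoid at fm Hz with modulation index `index`. A larger index
    spreads energy into sidebands at fc ± k*fm, giving the dense,
    inharmonic spectra useful for fracture-like events."""
    t = np.arange(int(duration * sr)) / sr
    return amp * np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
```

With `index = 0` the formula degenerates to a pure sine at the carrier frequency, which is one simple way to cross-fade smoothly toward a linear vibration model.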
52. Discussion

- Prospective model
- Possible problem: reporting from the physics engine
- The simplicity of the tools allows real-time rendering
53. Overview

1 Sound and Virtuality
2 Physics-Based Sound Synthesis
    Contact Modeling
    Resonator Modeling
3 Example-Based Synthesis
    Flexible Sound Synthesis
4 Perspectives on a Hybrid Model
    Motivation and Application
5 Conclusion and Discussion
    Contributions
    Extensions and Applications
54. Synthesis of Sounds for Animation

Difficulties
- Audio-visual coherence
- Extremely dynamic character
- Precision of synthesis
- Large variety of objects
57. Contributions: An Overview

Complex contact modeling
- 2D visual textures used as roughness maps
- Audible and position-dependent variations
- Detail-layer mechanisms

Improved modal analysis for resonator modeling
- Complex non-manifold geometries can be handled
- Multi-scale resolution
- Coherence between simulation and audio

Flexibility of sound design
- Audio grains and correlation patterns
- Dynamic retargetting to events
- Extended sound synthesis capabilities
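The resonator-modeling contribution builds on standard modal synthesis: an impact excites modes that ring as damped sinusoids, y(t) = Σ a_k e^(-d_k t) sin(2π f_k t). A generic sketch of that output stage; the mode data here are assumed, not extracted by the thesis pipeline:

```python
import numpy as np

def modal_impact(freqs, damps, amps, duration, sr=44100):
    """Sum of exponentially damped sinusoids, one per mode, as produced
    when a modally analyzed resonator is struck at t = 0."""
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs, damps, amps):
        out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out
```

Modal analysis of the object's geometry supplies the (f_k, d_k, a_k) triples; at run time only this cheap sum is evaluated, which is what makes the approach real-time friendly.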
58. Contributions: Perspectives

A prospective hybrid model for complex physical phenomena
- Focus on nonlinearity
- Combination of physically based and example-based methods
- Application case: fracture events
59. Overview

1 Sound and Virtuality
2 Physics-Based Sound Synthesis
    Contact Modeling
    Resonator Modeling
3 Example-Based Synthesis
    Flexible Sound Synthesis
4 Perspectives on a Hybrid Model
    Motivation and Application
5 Conclusion and Discussion
    Contributions
    Extensions and Applications
62. Promising Directions for Future Work

Complex contact modeling
- Two interacting textures
- Surface-based interactions
- Adequate perceptual experiments

Improved modal analysis for resonator modeling
- Recent work from [Nesme et al. Siggraph 09]
- Investigations with the GPU for in-line computation
- Complete integration in a virtual scene

Example-based technique
- Clustering of similar grains
- Statistical analysis of correlation patterns
- Physics engine design
63. Promising Directions for Future Work (continued)

Hybrid model for fracture events
- Fracture sound simulation framework
- Tracking of relevant physical data
64. Conclusion

- New physically based algorithms for sound rendering
- Flexibility of sound modeling
- Ideas on an adequate hybrid sound model

Additional info: http://www-sop.inria.fr/members/Cecile.Picard/
65. Acknowledgements

- George Drettakis, François Faure, and Nicolas Tsingos
- The REVES team
- Marie-Paule Cani and the Evasion team
- Paul G. Kry at McGill University, Montréal
- Eden Games, an ATARI game studio, Lyon