Bitmovin LIVE Tech Talks: 5 Analytics Metrics That Matter (Bitmovin Inc)
During our LIVE event series (NAB 2020 Edition), two of Bitmovin's product experts took some time to define the five analytics metrics that matter most to ensure your OTT or video application succeeds, why they matter, and how to use them.
These metrics were Video Startup Time, Bitrate Heatmaps, Impressions vs. Total Hours Watched, Error Type (and %), and Rebuffering Percentage.
Hear what they had to say in the full on demand recording here: https://go.bitmovin.com/techtalk-live-video-metrics-obsession?utm_source=slideshare
HTTP Adaptive Streaming (HAS) enables high-quality streaming of video content. In HAS, videos are divided into short intervals called segments, and each segment is encoded at various qualities/bitrates to adapt to the available bandwidth. Multiple encodings of the same content impose high costs on video content providers. To reduce the time-complexity of encoding multiple representations, state-of-the-art methods typically encode the highest quality representation first and reuse the information gathered during its encoding to accelerate the encoding of the remaining representations. As encoding the highest quality representation requires the highest time-complexity compared to the lower quality representations, it becomes the bottleneck in parallel encoding scenarios, and the overall time-complexity is limited to that of the highest quality representation. In this paper, to address this problem, we consider each representation, from the highest to the lowest quality, as a potential single reference to accelerate the encoding of the other, dependent representations. We formulate a set of encoding modes and assess their performance in terms of BD-Rate and time-complexity, using both VMAF and PSNR as objective metrics. Experimental results show that encoding a middle quality representation as the reference can significantly reduce the maximum encoding complexity and is hence an efficient way of encoding multiple representations in parallel. Based on this fact, a fast multirate encoding method is proposed which utilizes the depth and prediction mode information of a middle quality representation to accelerate the encoding of the dependent representations.
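To make the critical-path argument concrete, here is a minimal Python sketch with invented per-representation encoding times (not the paper's measurements): the reference encode must finish first, and the dependent encodes, running in parallel afterwards, are assumed to be accelerated by reusing its information.

```python
# Illustrative sketch (not the paper's implementation) of why the choice of
# reference representation bounds the parallel wall time.

# Hypothetical standalone encoding times (seconds), highest quality first.
STANDALONE_TIME = {"2160p": 100.0, "1080p": 40.0, "720p": 20.0, "360p": 8.0}
SPEEDUP_FROM_REUSE = 0.5  # assumed acceleration factor for dependent encodes

def parallel_wall_time(reference: str) -> float:
    """Wall time = reference encode + slowest dependent encode (run in parallel)."""
    dependents = [t * SPEEDUP_FROM_REUSE
                  for rep, t in STANDALONE_TIME.items() if rep != reference]
    return STANDALONE_TIME[reference] + max(dependents)

for ref in STANDALONE_TIME:
    print(f"reference={ref:6s} -> wall time {parallel_wall_time(ref):6.1f}s")
# With the highest quality (2160p) as reference, its own 100s encode dominates.
# Lower rungs shorten the critical path further but (per the paper) cost
# BD-Rate; a middle rung balances complexity and compression efficiency.
```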
Video over IP on the Corporate and Public Network, plus a VidOvation solutions overview. VidOvation specializes in video over IP, IPTV, MPEG-4 H.264, JPEG2000, DIRAC SMPTE VC-2, streaming video, storage, CATx extenders, video communication, mobile production, and display solutions. We move video, audio, and data over public, private, LAN, WAN, coax, twisted pair, and wireless networks.
CADLAD: Device-aware Bitrate Ladder Construction for HTTP Adaptive Streaming (Minh Nguyen)
Considering network conditions, video content, and the viewer's device type and screen resolution when constructing a bitrate ladder is necessary to deliver the best Quality of Experience (QoE). A large-screen device like a TV needs a high bitrate with high resolution to provide good visual quality, whereas a small one like a phone requires only a low bitrate with low resolution. In addition, encoding high-quality levels at the server side when the network is unable to deliver them causes unnecessary cost for the content provider. Recently, the Common Media Client Data (CMCD) standard has been proposed, which defines data that is collected at the client and sent to the server with its HTTP requests. This data is useful in log analysis, quality of service/experience monitoring, and delivery improvements.
In this paper, we introduce a CMCD-Aware per-Device bitrate LADder construction (CADLAD) that leverages CMCD to address the above issues. CADLAD comprises components at both client and server sides. The client calculates the top bitrate (tb), a CMCD parameter indicating the highest bitrate that can be rendered at the client, and sends it to the server together with its device type and screen resolution. The server then delivers to the client device a suitable bitrate ladder, whose maximum bitrate and resolution are based on the CMCD parameters, with the purpose of providing maximum QoE while minimizing delivered data. CADLAD has two versions, for Video on Demand (VoD) and live streaming scenarios. Our CADLAD is client agnostic; hence, it can work with any player and ABR algorithm at the client. The experimental results show that CADLAD is able to increase QoE by 2.6x while saving 71% of delivered data, compared to an existing bitrate ladder of an available video dataset. We implement our idea within CAdViSE, an open-source testbed, for reproducibility.
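As an illustration of the client side of this design, the sketch below builds a CMCD query string for a segment request in Python. The tb key is defined by the CMCD standard (CTA-5004); the com.example-* keys are hypothetical custom keys standing in for the device type and screen resolution that CADLAD sends, and the URL is a placeholder. (The sketch quotes all string values; the full spec also defines unquoted token types.)

```python
# Minimal sketch of a client attaching CMCD data to a segment request.
from urllib.parse import quote

def cmcd_query(pairs: dict) -> str:
    items = []
    for key in sorted(pairs):           # CMCD recommends alphabetical key order
        value = pairs[key]
        if isinstance(value, str):
            items.append(f'{key}="{value}"')   # string values are quoted
        else:
            items.append(f"{key}={value}")     # integers are not
    return "CMCD=" + quote(",".join(items))

params = {
    "tb": 6000,                       # top bitrate renderable by this client (kbps)
    "com.example-dt": "tv",           # hypothetical custom key: device type
    "com.example-sr": "3840x2160",    # hypothetical custom key: screen resolution
}
url = "https://cdn.example.com/video/seg-42.m4s?" + cmcd_query(params)
print(url)
```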
Bitmovin LIVE Tech Talks: Data Driven Video Workflows (Bitmovin Inc)
Part of Bitmovin's LIVE series, this Tech Talk took a deep dive into how data can help improve your video workflows. From implementation to management, our expert Daniel Weinberger reviewed some of the most important metrics you need to follow and how you can use them to optimize your video workflows.
View the full recording here: https://go.bitmovin.com/nab-live-data-driven-workflows?utm_source=slideshare
Webinar Slides: Cost of Errors on VoD Services (Bitmovin Inc)
All varieties of error types and codes can come at a great cost to OTT and streaming organizations. However, there are countless ways to mitigate these costs and prepare your organization for sustained success. First and foremost, one must understand the types of errors a video stream encounters and their monetary effect on a business’s bottom line. Taking a proactive approach with a granular data set for video analytics will remove the ambiguity of which errors are occurring, how often, where, and why.
Learn about the types of technical errors that videos can experience, their monetary effect on your bottom line, and some of the methods to resolve/prevent these errors from occurring.
View the corresponding whitepaper here: https://bit.ly/2Z2Rkhj
Calculate the cost of errors for your service with our free calculator: https://bitmovin.com/demos/cost-of-errors
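For a sense of what such a calculator does, here is a back-of-the-envelope Python sketch; every number in it is illustrative and not taken from the webinar or the calculator.

```python
# Back-of-the-envelope sketch of the "cost of errors" idea.
monthly_play_attempts = 1_000_000
error_rate            = 0.02    # assumed: 2% of plays hit a fatal error
abandon_after_error   = 0.60    # assumed share of affected viewers who give up
revenue_per_session   = 0.05    # assumed revenue attributed per session ($)

lost_sessions = monthly_play_attempts * error_rate * abandon_after_error
lost_revenue  = lost_sessions * revenue_per_session
print(f"~{lost_sessions:,.0f} abandoned sessions -> ~${lost_revenue:,.2f}/month at risk")
```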
Original post: http://blog.sezion.com/post/82079538402/what-is-video-encoding.
Video encoding is the process of converting a video so it can work on any type of platform. Each encoded video contains three parts: formats, codecs, and bitrates.
With Sezion’s cloud-based automatic video editing and generation platform (API, SDKs, or even our no-coding-required tools), the encoding process is included in our solutions.
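To make the formats/codecs/bitrates distinction concrete, here is a minimal Python sketch that shells out to the ffmpeg CLI (assumed installed); the renditions are illustrative, and this is not Sezion's pipeline. The container format comes from the output extension, the codec from -c:v/-c:a, and the bitrate from -b:v/-b:a.

```python
# Encode one input into several bitrate/resolution renditions with ffmpeg.
import subprocess

renditions = [  # illustrative (name, video bitrate, resolution) triples
    ("1080p", "5000k", "1920x1080"),
    ("720p",  "2500k", "1280x720"),
    ("360p",  "800k",  "640x360"),
]
for name, bitrate, size in renditions:
    subprocess.run([
        "ffmpeg", "-y", "-i", "input.mp4",
        "-c:v", "libx264", "-b:v", bitrate, "-s", size,  # video codec + bitrate + size
        "-c:a", "aac", "-b:a", "128k",                   # audio codec + bitrate
        f"output_{name}.mp4",                            # MP4 container format
    ], check=True)
```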
Divya Jain at AI Frontiers: Video Summarization (AI Frontiers)
As video content becomes mainstream, video summarization is becoming a hot research topic in academia and industry. Video thumbnail generation and summarization have been worked on for years, but deep learning and reinforcement learning are changing the landscape and emerging as the winners for optimal frame selection. Recent advances in GANs are improving the quality, aesthetics, and relevance of the frames selected to represent the original videos. Come join this session to get an understanding of the various challenges and emerging solutions around video summarization.
How to convert video files to audio format using Miro Video Converter by Rema... (Denise Fredeluces)
How to convert video files to audio format using Miro Video Converter App by Remarkable Virtual Pro
- How to convert video to audio
- Video file conversion
- Convert video to audio
More tutorials at http://remarkablevirtualpro.wordpress.com
ChatGPT 4-powered voiceovers have proven to be incredibly effective, with businesses reporting a substantial boost in brand perception and customer response. And Ai Video Suite has revolutionized and automated this process. ChatGPT 4 + AI Videos, Voiceovers = The Holy Grail of Video Marketing in 2023. Produce Engaging GPT 4 Powered Voiceovers at your fingertips:
Step 1: Create: Use the built-in GPT 4 AI Vox creator to generate a voiceover script. You could also upload and edit an existing script!
Step 2: Generate and Listen: Simply copy and paste the script into the Vox Generator. Select your preferred language, accent, tone, pitch, and volume. In a matter of minutes, your voiceover is ready to download. Optionally add background music from their built-in library or bring your own. Add this vox to your sales videos, business ads, explainer and product promos, and any other videos, audiobooks, etc.
Step 3: Profit: These voiceovers are yours to use in your own business or sell them to clients for big profits! Rinse and repeat!
Activate Ai Video Suite Apps & ENJOY – Enjoy massive monthly savings, traffic, leads, sales, conversions & profits… And Live The Life Of Your Dreams. Don’t Let The Competition Outshine You. Embrace Ai Video Suite and unlock a world of limitless possibilities for your business. Your success story begins now, and they’re here to guide you every step of the way. So why wait? Sign Up Today And Experience The Power Of AI-Driven Video Marketing & Content Creation.
Introducing AI Edit: Transforming Video Editing with Advanced AI Capabilities
AI Edit is a groundbreaking editing platform that is revolutionizing the way videos are created. With its state-of-the-art features and intelligent automation, AI Edit is set to redefine the landscape of video editing for professionals and enthusiasts alike.
Unleash Your Creativity with Easy Controls
AI Edit brings a new level of simplicity to the editing process. It understands codewords like "c" for cut, "pal" for paste at last, and "m" for move, making editing effortless. Whether you prefer voice commands, keyboard controls, or a combination of both, AI Edit adapts to your style seamlessly.
Revolutionary Storytelling Made Effortless
Prepare to be amazed as AI Edit analyzes every aspect of your videos and audio, automatically organizing them into a cohesive and engaging story. Simply describe the desired sequence, such as "create Goku's entry, Vegeta's epic battle, and Frieza's dramatic entrance," and AI Edit will flawlessly arrange the clips and apply effects, allowing you to focus on the visual impact.
Maximize Your Screen's Potential
AI Edit provides an uncluttered workspace, maximizing your screen's real estate. Unnecessary buttons like cut and paste are intelligently hidden, allowing you to fully immerse yourself in the editing process. This streamlined interface enhances productivity and enables you to create extraordinary videos with ease.
Experience Limitless Editing Capabilities
With AI Edit, the possibilities are endless. Discover a wide range of advanced filters, cinematic effects, and artistic stylizations to enhance the visual appeal of your videos. Take advantage of intelligent audio editing options, such as noise reduction, audio leveling, and automatic suggestions for background music and sound effects. Seamlessly integrate smart transitions to create polished and captivating visuals.
Unlock Your Video's Potential with Intelligent Features
AI Edit empowers you to add professional-quality text, captions, subtitles, and titles to your videos using customizable templates. Correct color imbalances, enhance contrast, and adjust exposure effortlessly with automated color correction. Bring your footage to life with motion tracking, allowing you to incorporate graphics, text, and effects that follow the movement within your video seamlessly.
Effortless Editing at Your Fingertips
AI Edit simplifies complex tasks. Easily detect and replace green screens, stabilize shaky footage for a professional look, and create stunning time-lapse and slow-motion effects to capture attention and heighten the impact. Collaborate seamlessly with team members using AI Edit's collaboration features, and integrate with popular cloud storage platforms for easy access and sharing across devices.
Invest in the Future of Video Editing
Embrace AI Edit and witness the convergence of creativity and innovation. Its capabilities are beyond words; even my 3,000-word limit isn't enough to describe them.
Continuous Verification in a Serverless World (Leon Stigter)
At VMware, we define Continuous Verification as:
“A process of querying external systems and using information from the response to make decisions to improve the development and deployment process.”
At #OSSDay, I got a chance to talk not only about what that means for serverless apps, but also about how you can build it into your existing pipelines using tools like GitLab, CloudHealth, Wavefront, and Gotling.
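As a generic illustration of that definition, the sketch below shows a pipeline step that queries an external system and gates a deployment on the response; the endpoint, payload shape, and threshold are hypothetical placeholders, not a real Wavefront or CloudHealth API.

```python
# Minimal sketch of a Continuous Verification gate in a CI/CD pipeline:
# query an external monitoring system, then pass or fail the stage.
import sys
import json
import urllib.request

METRICS_URL = "https://metrics.example.com/api/error-rate?service=checkout"  # placeholder
MAX_ERROR_RATE = 0.01  # assumed gate: fail the stage above 1% errors

def verify() -> bool:
    with urllib.request.urlopen(METRICS_URL) as resp:
        error_rate = json.load(resp)["error_rate"]   # assumed response shape
    print(f"observed error rate: {error_rate:.4f}")
    return error_rate <= MAX_ERROR_RATE

if __name__ == "__main__":
    sys.exit(0 if verify() else 1)  # non-zero exit fails the CI/CD stage
```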
Bitmovin LIVE Tech Talks: Overcoming Encoding Challenges (Bitmovin Inc)
Learn how modern video workflows must plug and play APIs and compression techniques to catch up with the quality demands of home cinema. In our on-demand webinar, we demonstrate the unique solutions that enable HDR and UHD delivery alongside low bandwidth requirements. A special focus is on how flexible service design allows for faster time-to-market and minimized cost of delay.
View the full on-demand webinar here: https://go.bitmovin.com/techtalk-live-encoding-solutions?utm_source=slideshare
Mid-level review of server infrastructure that is required and often used with WebRTC, including signaling servers, NAT traversal servers (STUN and TURN), media servers, and WebRTC Gateways.
Presented at the WebRTC Japan Conference in Tokyo.
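As a small illustration of where STUN and TURN fit on the client side, here is a sketch using the Python aiortc library; the server URLs and credentials are placeholders, and the signaling and media servers the talk covers are separate components not shown here.

```python
# Point a peer connection at NAT-traversal servers (STUN/TURN).
from aiortc import RTCConfiguration, RTCIceServer, RTCPeerConnection

config = RTCConfiguration(iceServers=[
    RTCIceServer(urls="stun:stun.example.com:3478"),   # STUN: discover public address
    RTCIceServer(urls="turn:turn.example.com:3478",    # TURN: relay when direct paths fail
                 username="user", credential="secret"),
])
pc = RTCPeerConnection(configuration=config)
# From here, offers/answers would be exchanged over a signaling channel
# (e.g., a WebSocket server) before any media can flow.
```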
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
VEED: Video Encoding Energy and CO2 Emissions Dataset for AWS EC2 Instances (Alpen-Adria-Universität)
Video streaming constitutes 65% of global internet traffic, prompting an investigation into its energy consumption and CO2 emissions. Video encoding, a computationally intensive part of streaming, has moved to cloud computing for its scalability and flexibility. However, cloud data centers’ energy consumption, especially for video encoding, poses environmental challenges. This paper presents VEED, a FAIR Video Encoding Energy and CO2 Emissions Dataset for Amazon Web Services (AWS) EC2 instances. The dataset also contains the duration, CPU utilization, and cost of the encoding. To prepare this dataset, we introduce a model and conduct a benchmark to estimate the energy and CO2 emissions of different Amazon EC2 instances during the encoding of 500 video segments with various complexities and resolutions using Advanced Video Coding (AVC) and High-Efficiency Video Coding (HEVC). VEED and its analysis can provide valuable insights for video researchers and engineers to model energy consumption, manage energy resources, and distribute workloads, contributing to the sustainability and cost-effectiveness of cloud-based video encoding. VEED is available on GitHub.
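The core relation such a dataset supports can be shown in a few lines: emissions are the product of encoding energy and the grid's carbon intensity. The figures below are invented for illustration and are not values from VEED.

```python
# Illustrative emission estimate for one encoding job.
encoding_energy_wh = 35.0    # assumed energy for one encode (Wh)
carbon_intensity_g = 400.0   # assumed grid carbon intensity (gCO2 per kWh)

co2_grams = (encoding_energy_wh / 1000.0) * carbon_intensity_g
print(f"{co2_grams:.1f} gCO2 for this encode")
# Running the same job in a region with, say, 50 gCO2/kWh would cut the
# footprint eightfold, which is why instance location matters.
```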
Addressing climate change requires a global decrease in greenhouse gas (GHG) emissions. In today’s digital landscape, video streaming significantly influences internet traffic, driven by the widespread use of mobile devices and the rising popularity of streaming platforms. This trend emphasizes the importance of evaluating energy consumption and the development of sustainable and eco-friendly video streaming solutions with a low Carbon Dioxide (CO2) footprint. We developed a specialized tool, released as an open-source library called GREEM, addressing this pressing concern. This tool measures video encoding and decoding energy consumption and facilitates benchmark tests. It monitors the computational impact on hardware resources and offers various analysis cases. GREEM is helpful for developers, researchers, service providers, and policy makers interested in minimizing the energy consumption of video encoding and streaming.
Optimal Quality and Efficiency in Adaptive Live Streaming with JND-Aware Low ... (Alpen-Adria-Universität)
In HTTP adaptive live streaming applications, video segments are encoded at a fixed set of bitrate-resolution pairs known as a bitrate ladder. Live encoders use the fastest available encoding configuration, referred to as a preset, to ensure the minimum possible latency in video encoding. However, an optimized preset and an optimized number of CPU threads for each encoding instance may result in (i) increased quality and (ii) efficient CPU utilization while encoding. For low latency live encoders, the encoding speed is expected to be greater than or equal to the video framerate. In this light, this paper introduces a Just Noticeable Difference (JND)-Aware Low latency Encoding Scheme (JALE), which uses random forest-based models to jointly determine the optimized encoder preset and thread count for each representation, based on video complexity features, the target encoding speed, the total number of available CPU threads, and the target encoder. Experimental results show that, on average, JALE yields a quality improvement of 1.32 dB PSNR and 5.38 VMAF points at the same bitrate, compared to the fastest-preset encoding of the HTTP Live Streaming (HLS) bitrate ladder using the open-source x265 HEVC encoder with eight CPU threads for each representation. These enhancements are achieved while maintaining the desired encoding speed. Furthermore, on average, JALE results in an overall storage reduction of 72.70%, a reduction in the total number of CPU threads used by 63.83%, and a 37.87% reduction in the overall encoding time, considering a JND of six VMAF points.
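A toy sketch of the prediction step described above, using a scikit-learn random forest on synthetic stand-in features and labels (not the paper's training data or feature definitions):

```python
# Random forest mapping complexity features and constraints to a joint
# (encoder preset, thread count) choice, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# features: [spatial complexity, temporal complexity, target fps, available threads]
X = rng.random((200, 4)) * [100, 50, 60, 32]
# labels: each class id stands for one (preset, threads) pair
y = rng.integers(0, 10, size=200)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

segment = [[62.0, 18.5, 30.0, 16.0]]            # one incoming live segment
print("chosen (preset, threads) class:", model.predict(segment)[0])
```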
In the context of rising environmental concerns, this paper introduces VEEP, an architecture designed to predict energy consumption and CO2 emissions in cloud-based video encoding. VEEP combines video analysis with machine learning (ML)-based energy prediction and real-time carbon intensity, enabling precise estimations of CPU energy usage and CO2 emissions during the encoding process. It is trained on the Video Complexity Dataset (VCD) and encoding results from various AWS EC2 instances. VEEP achieves high accuracy, indicated by an R² score of 0.96, a mean absolute error (MAE) of 2.41 × 10⁻⁵, and a mean squared error (MSE) of 1.67 × 10⁻⁹. An important finding is the potential to reduce emissions by up to 375 times when comparing cloud instances and their locations. These results highlight the importance of considering environmental factors in cloud computing.
In today’s dynamic streaming landscape, where viewers access content on various devices and encounter fluctuating network conditions, optimizing video delivery for each unique scenario is imperative. Video content complexity analysis, content-adaptive video coding, and multi-encoding methods are fundamental for the success of adaptive video streaming, as they serve crucial roles in delivering high-quality video experiences to a diverse audience. Video content complexity analysis allows us to comprehend the video content’s intricacies, such as motion, texture, and detail, providing valuable insights to enhance encoding decisions. By understanding the content’s characteristics, we can efficiently allocate bandwidth and encoding resources, thereby improving compression efficiency without compromising quality. Content-adaptive video coding techniques built upon this analysis involve dynamically adjusting encoding parameters based on the content complexity. This adaptability ensures that the video stream remains visually appealing and artifacts are minimized, even under challenging network conditions. Multi-encoding methods further bolster adaptive streaming by offering faster encoding of multiple representations of the same video at different bitrates. This versatility reduces computational overhead and enables efficient resource allocation on the server side. Collectively, these technologies empower adaptive video streaming to deliver optimal visual quality and uninterrupted viewing experiences, catering to viewers’ diverse needs and preferences across a wide range of devices and network conditions. Embracing video content complexity analysis, content-adaptive video coding, and multi-encoding methods is essential to meet modern video streaming platforms’ evolving demands and create immersive experiences that captivate and engage audiences. In this light, this dissertation proposes contributions categorized into four classes:
Empowerment of Atypical Viewers via Low-Effort Personalized Modeling of Video... (Alpen-Adria-Universität)
Quality of Experience (QoE) and QoE models are of increasing importance to networked systems. The traditional QoE modeling for video streaming applications builds a one-size-fits-all QoE model that underserves atypical viewers who perceive QoE differently. To address the problem of atypical viewers, this paper proposes iQoE (individualized QoE), a method that employs explicit, expressible, and actionable feedback from a viewer to construct a personalized QoE model for this viewer. The iterative iQoE design exercises active learning and combines a novel sampler with a modeler. The chief emphasis of our paper is on making iQoE sample-efficient and accurate.
By leveraging the Microworkers crowdsourcing platform, we conduct studies with 120 subjects who provide 14,400 individual scores. According to the subjective studies, a session of about 22 minutes empowers a viewer to construct a personalized QoE model that, compared to the best of the 10 baseline models, delivers the average accuracy improvement of at least 42% for all viewers and at least 85% for the atypical viewers. The large-scale simulations based on a new technique of synthetic profiling expand the evaluation scope by exploring iQoE design choices, parameter sensitivity, and generalizability.
Optimizing Video Streaming for Sustainability and Quality: The Role of Prese... (Alpen-Adria-Universität)
HTTP Adaptive Streaming (HAS) methods divide a video into smaller segments, encoded at multiple pre-defined bitrates to construct a bitrate ladder. Bitrate ladders are usually optimized per title over several dimensions, such as bitrate, resolution, and framerate. This paper adds a new dimension to the bitrate ladder by considering the energy consumption of the encoding process. Video encoders often have multiple pre-defined presets to balance the trade-off between encoding time, energy consumption, and compression efficiency. Faster presets disable certain coding tools defined by the codec to reduce the encoding time at the cost of reduced compression efficiency. Firstly, this paper evaluates the energy consumption and compression efficiency of different x265 presets for 500 video sequences. Secondly, optimized presets are selected for various representations in a bitrate ladder based on the results to guarantee a minimal drop in video quality while saving energy. Finally, a new per-title model, which optimizes the trade-off between compression efficiency and energy consumption, is proposed. The experimental results show that decreasing the VMAF score by 0.15 and 0.39 while choosing an optimized preset results in encoding energy savings of 70% and 83%, respectively.
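The selection rule can be illustrated in a few lines of Python; the preset measurements below are invented, with only the 0.39 VMAF margin taken from the abstract.

```python
# Pick the lowest-energy preset whose quality drop (vs. the slowest preset)
# stays within an acceptable VMAF margin.
presets = [  # (x265 preset, encoding energy in Wh, VMAF) - invented figures
    ("veryslow", 120.0, 95.0),
    ("slow",      60.0, 94.8),
    ("medium",    30.0, 94.6),
    ("veryfast",  10.0, 92.9),
]
MAX_VMAF_DROP = 0.39  # one of the margins evaluated in the paper

best_vmaf = max(v for _, _, v in presets)
eligible = [(p, e, v) for p, e, v in presets if best_vmaf - v <= MAX_VMAF_DROP]
name, energy, vmaf = min(eligible, key=lambda t: t[1])  # lowest energy wins
print(f"pick '{name}': {energy} Wh at VMAF {vmaf}")
```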
Energy-Efficient Multi-Codec Bitrate-Ladder Estimation for Adaptive Video Str... (Alpen-Adria-Universität)
With the emergence of multiple modern video codecs, streaming service providers are forced to encode, store, and transmit bitrate ladders of multiple codecs separately, consequently suffering from additional energy costs for encoding, storage, and transmission.
To tackle this issue, we introduce an online energy-efficient Multi-Codec Bitrate ladder Estimation scheme (MCBE) for adaptive video streaming applications. In MCBE, quality representations within the bitrate ladder of new-generation codecs (e.g., HEVC, AV1) that lie below the predicted rate-distortion curve of the AVC codec are removed. Moreover, perceptual redundancy between representations of the bitrate ladders of the considered codecs is also minimized based on a Just Noticeable Difference (JND) threshold. To this end, random forest-based models predict the VMAF of the bitrate ladder representations of each codec. In a live streaming session where all clients support the decoding of AVC, HEVC, and AV1, MCBE achieves impressive results, reducing cumulative encoding energy by 56.45%, storage energy usage by 94.99%, and transmission energy usage by 77.61% (considering a JND of six VMAF points). These energy reductions are in comparison to a baseline bitrate ladder encoding based on current industry practice.
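A minimal sketch of the JND-based pruning idea, with invented VMAF values (MCBE predicts these with random forest models rather than assuming them): rungs whose predicted quality is within one JND of a kept neighbour add no noticeable improvement and are dropped.

```python
# Prune ladder rungs that are not at least one JND better than the last kept rung.
JND_VMAF = 6.0  # JND threshold used in the paper

ladder = [  # (bitrate kbps, predicted VMAF), ascending - invented values
    (500, 62.0), (1000, 74.0), (2000, 78.5), (4000, 84.0), (8000, 88.0),
]

pruned = [ladder[0]]
for bitrate, vmaf in ladder[1:]:
    if vmaf - pruned[-1][1] >= JND_VMAF:  # keep only noticeably better rungs
        pruned.append((bitrate, vmaf))
print(pruned)  # [(500, 62.0), (1000, 74.0), (4000, 84.0)]
```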
Machine Learning Based Resource Utilization Prediction in the Computing Conti... (Alpen-Adria-Universität)
This paper presents UtilML, a novel approach for tackling resource utilization prediction challenges in the computing continuum. UtilML leverages Long-Short-Term Memory (LSTM) neural networks, a machine learning technique, to forecast resource utilization accurately. The effectiveness of UtilML is demonstrated through its evaluation of data extracted from a real GPU cluster in a computing continuum infrastructure comprising more than 1800 computing devices. To assess the performance of UtilML, we compared it with two related approaches that utilize a Baseline-LSTM model. Furthermore, we analyzed the LSTM results against User-Predicted values provided by GPU cluster owners for task deployment with estimated allocation values. The results indicate that UtilML outperformed user predictions by 2% to 27% for CPU utilization prediction. For memory prediction, UtilML variants excelled, showing improvements of 17% to 20% compared to user predictions.
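A toy sketch of the forecasting setup, using a small Keras LSTM on synthetic data standing in for real cluster telemetry (UtilML's actual architecture and features are not specified here):

```python
# Forecast the next CPU-utilization value from a sliding window of past readings.
import numpy as np
import tensorflow as tf

WINDOW = 24
series = (np.sin(np.linspace(0, 30, 600)) + 1) / 2   # fake utilization in [0, 1]
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
y = series[WINDOW:]                                   # next value after each window

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_util = model.predict(series[-WINDOW:].reshape(1, WINDOW, 1), verbose=0)[0, 0]
print(f"forecast next CPU utilization: {next_util:.3f}")
```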
The exponential growth of computer game streaming has led to the development of Quality of Experience (QoE) metrics to evaluate user satisfaction and enjoyment during online gameplay and live streaming. Adaptive Bitrate (ABR) streaming is a recent technology that has been suggested to improve QoE. This method enhances the streaming experience, upholds visual quality, minimizes stall events, and boosts player retention. It achieves this by estimating network bottlenecks and selecting appropriate versions of the content that best match the available bandwidth rather than adjusting encoding parameters. To investigate the correlation between quality switching and stall events, a subjective test was conducted separately and comparatively with 71 participants. For more detailed and in-depth research, video games were analyzed with the Video Complexity Analyzer (VCA) tool and divided into three categories of different genres, camera view, and temporal complexity heatmap from the two sets of normal and action scenes. This study seeks to shed light on three unresolved issues pertinent to QoE in game streaming: (i) the user preferences towards quality switching and stall events across varied scenes and games, (ii) the user inclinations towards either a single, prolonged stall event or multiple, shorter stall events, and (iii) the impact of conspicuous quality switching on the user’s QoE. Results from the study provided valuable insights, both qualitatively and quantitatively. The study found a marked preference among users for quality switching over stall events across all types of game streaming, irrespective of the scene’s intensity. Furthermore, it was observed that multiple short-stall events were generally favored over a single long-stall event in streaming first-person shooting games. Interestingly, approximately half of the participants remained oblivious to quality switching during their game viewing sessions, and among those who noticed a change in quality, the alteration did not significantly impact their perceived QoE.
Network-Assisted Delivery of Adaptive Video Streaming Services through CDN, S... (Alpen-Adria-Universität)
Multimedia applications, mainly video streaming services, are currently the dominant source of network load worldwide. In recent Video-on-Demand (VoD) and live video streaming services, traditional streaming delivery techniques have been replaced by adaptive solutions based on the HTTP protocol. Current trends toward high-resolution (e.g., 8K) and/or low-latency VoD and live video streaming pose new challenges to end-to-end (E2E) bandwidth demand and have stringent delay requirements. To cope with these, video providers typically rely on Content Delivery Networks (CDNs) to ensure that they provide scalable video streaming services. To support future streaming scenarios involving millions of users, it is necessary to increase the CDNs’ efficiency. It is widely agreed that these requirements may be satisfied by adopting emerging networking techniques to present Network-Assisted Video Streaming (NAVS) methods. Motivated by this, this thesis goes one step beyond traditional pure client-based HAS algorithms by incorporating (an) in-network component(s) with a broader view of the network to present completely transparent NAVS solutions for HAS clients.
Over the last recent years, video streaming traffic has become the dominating service over mobile networks. The two main reasons for the growth of video streaming traffic are the improved capabilities of mobile devices and the emergence of HTTP Adaptive Streaming (HAS). Hence, there is a demand for new technologies to cope with the increasing traffic load while improving clients’ Quality of Experience (QoE). The network plays a crucial role in the video streaming process. One of the key technologies on the network side is Multi-access Edge Computing (MEC), which has several key characteristics: computing power, storage, proximity to the clients and access to network and player metrics. Thus, it is possible to deploy mechanisms at the MEC node that assist video streaming.
This thesis investigates how MEC capabilities can be leveraged to support video streaming delivery, specifically to improve the QoE, reduce latency or increase storage and bandwidth savings.
In the last decades, video streaming has been developing significantly. Among current technologies, HTTP Adaptive Streaming (HAS) is considered the de-facto approach to multimedia transmission over the internet. In HAS, the video is split into temporal segments of the same duration (e.g., 4s), each of which is then encoded into different quality versions and stored at servers. The end user sends requests to the server to retrieve segments with specific quality versions determined by an Adaptive Bitrate (ABR) algorithm in order to adapt to throughput fluctuations. Though the majority of HAS-based media services function well even under throughput restrictions and variations, there are still significant challenges for multimedia systems, especially the tradeoff among increasing content complexity, various time-related requirements, and Quality of Experience (QoE). Content complexity encompasses the increased demands for data, such as high-resolution videos and high frame rates, as well as novel content formats, such as virtual reality (VR) and augmented reality (AR). Time-related requirements include, but are not limited to, start-up delay and end-to-end latency. QoE can be defined as the level of satisfaction or frustration experienced by the user of an application or service. Optimizing for one aspect usually negatively impacts at least one of the other two aspects. This thesis tackles critical open research questions in the context of HAS that significantly impact the QoE at the client side.
VE-Match: Video Encoding Matching-based Model for Cloud and Edge Computing In... (Alpen-Adria-Universität)
The considerable surge in energy consumption within data centers can be attributed to the exponential rise in demand for complex computing workflows and storage resources. Video streaming applications are both compute- and storage-intensive and account for the majority of today’s internet services. In this work, we designed a video encoding application consisting of a codec, bitrate, and resolution set for encoding a video segment. We then propose VE-Match, a matching-based method to schedule video encoding applications on both cloud and edge resources to optimize costs and energy consumption. Evaluation results on a real computing testbed federated between Amazon Web Services (AWS) EC2 cloud instances and the Alpen-Adria University (AAU) edge server reveal that VE-Match achieves lower costs by 17%-78% in the cost-optimized scenarios compared to the energy-optimized and cost-energy tradeoff scenarios. Moreover, VE-Match improves video encoding energy consumption by 38%-45% and gCO2 emissions by up to 80% in the energy-optimized scenarios compared to the cost-optimized and cost-energy tradeoff scenarios.
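A compact sketch of the matching idea, with invented per-job cost and energy figures and placeholder resource names; the weight alpha selects the scenario (1.0 = cost-optimized, 0.0 = energy-optimized, 0.5 = tradeoff).

```python
# Assign an encoding job to the resource minimizing a weighted cost/energy score.
resources = {  # name: (dollar cost per job, energy per job in Wh) - invented
    "aws_c5.xlarge": (0.040, 55.0),
    "aws_t3.large":  (0.025, 70.0),
    "aau_edge":      (0.010, 90.0),
}

def place(alpha: float) -> str:
    # Normalize each dimension so cost and energy are comparable.
    max_cost = max(c for c, _ in resources.values())
    max_energy = max(e for _, e in resources.values())
    def score(c, e):
        return alpha * (c / max_cost) + (1 - alpha) * (e / max_energy)
    return min(resources, key=lambda r: score(*resources[r]))

for alpha in (1.0, 0.5, 0.0):
    print(f"alpha={alpha}: encode on {place(alpha)}")
```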
Energy Consumption in Video Streaming: Components, Measurements, and Strategies (Alpen-Adria-Universität)
The rapid growth of video streaming usage is a significant source of energy consumption, driven by improved internet connections and service offerings, the quick development of video entertainment, the deployment of Ultra High-Definition, Virtual and Augmented Reality, as well as an increasing number of video surveillance and IoT applications. To address this challenge, it is essential to understand the various components involved in energy consumption during video streaming, ranging from video encoding to decoding and displaying the video on the end user’s screen. Then, it is critical to measure energy consumption for each component accurately and conduct an in-depth analysis to develop energy-efficient strategies that optimize video streaming [1, 2, 3]. These components are classified into three categories [4]: (i) data centers, which include encoding, packaging, and storage on cloud data centers; (ii) networks, which include core network and access networks; and (iii) end-user devices which involve decoding, players, hardware, etc.
In addition to identifying the primary components of video streaming that affect energy consumption, it is important to conduct a comprehensive analysis of the entire video streaming process. It is also essential to balance energy optimization and service quality to ensure that energy-efficient strategies are implemented without sacrificing the quality of video streaming services.
This talk aims to provide insights into the components of video streaming that contribute to energy consumption and highlight the challenges associated with measuring their energy usage. I will also introduce the tools that can be used for energy measurements for those components and the possible and associated strategies that lie within energy efficiency. By accurately measuring energy consumption, digital media companies can effectively monitor and control their energy usage, ultimately leading to cost savings and improved sustainability.
Exploring the Energy Consumption of Video Streaming: Components, Challenges, ... (Alpen-Adria-Universität)
The rapid growth of video streaming usage is a significant source of energy consumption, driven by improved internet connections and service offerings, the quick development of video entertainment, the deployment of Ultra High-Definition, Virtual and Augmented Reality, as well as an increasing number of video surveillance and IoT applications. However, it is essential to note that these advancements come at the cost of energy consumption. To address this challenge, it is essential to understand the various components involved in energy consumption during video streaming, ranging from video encoding to decoding and displaying the video on the end user’s screen. Then, it is critical to accurately measure energy consumption for each component and conduct an in-depth analysis to develop energy-efficient strategies that optimize video streaming. I categorize these components into three categories: (i) data centers, (ii) networks, and (iii) end-user devices.
In this talk, my objective is to provide insights into the components of video streaming that contribute to energy consumption and highlight the challenges associated with measuring their energy usage. I will also introduce the tools that can be used for energy measurements for those components and the possible and associated strategies that lie within energy efficiency. By accurately measuring energy consumption, digital media companies can effectively monitor and control their energy usage, ultimately leading to cost savings and improved sustainability.
Video Coding Enhancements for HTTP Adaptive Streaming Using Machine Learning (Alpen-Adria-Universität)
Video is evolving into a crucial tool as daily lives are increasingly centered around visual communication. The demand for better video content is constantly rising, from entertainment to business meetings. The delivery of video content to users is of utmost significance. HTTP adaptive streaming, in which the video content adjusts to the changing network circumstances, has become the de-facto method for delivering internet video.
As video technology continues to advance, it presents a number of challenges, one of which is the large amount of data required to describe a video accurately. To address this issue, it is necessary to have a powerful video encoding tool. Historically, these efforts have relied on hand-crafted tools and heuristics. However, with the recent advances in machine learning, there has been increasing exploration into using these techniques to enhance video coding performance.
This thesis proposes eight contributions that enhance video coding performance for HTTP adaptive streaming using machine learning.
Optimizing QoE and Latency of Live Video Streaming Using Edge Computing a... (Alpen-Adria-Universität)
Nowadays, HTTP Adaptive Streaming (HAS) has become the de-facto standard for delivering video over the Internet. More users have started generating and delivering high-quality live streams (usually 4K resolution) through popular online streaming platforms, resulting in a rise in live streaming traffic. Typically, the video contents are generated by streamers and watched by many audiences, geographically distributed in various locations far away from the streamers. The resource limitation in the network (e.g., bandwidth) is a challenging issue for network and video providers to meet the users’ requested quality. This dissertation leverages edge computing capabilities and in-network intelligence to design, implement, and evaluate approaches to optimize Quality of Experience (QoE) and end-to-end (E2E) latency of live HAS. In addition, improving transcoding performance and optimizing the cost of running live HAS services and the network’s backhaul utilization are considered. Motivated by the mentioned issue, the dissertation proposes five contributions in two classes: optimizing resource utilization and light-weight transcoding.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work, along with a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the cost of security. This best practices guide outlines steps users can take to better protect personal devices and information.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf (Peter Spielvogel)
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A