This document discusses augmented reality in a WebRTC-capable browser. It begins with an introduction to augmented reality and how it differs from virtual reality by blending virtual elements with the real world. It then discusses methods for rendering augmented reality using computer vision, object recognition, and other techniques, and the key components needed for a web-based augmented reality solution: the getUserMedia API, WebGL, and WebRTC. The document provides an overview of WebRTC and examples of using APIs like getUserMedia and RTCPeerConnection to enable real-time communication in the browser. It concludes with JavaScript libraries for building 3D graphics, such as Three.js, and examples combining WebRTC and WebGL.
JSARToolKit / LiveChromaKey / LivePointers - Next gen of AR (Yusuke Kawasaki)
I gave a talk about the next generation of AR. The pure ActionScript 3.0 libraries LiveChromaKey and LivePointers were presented at SparkStudy/09 (Spark project study session #09).
Try this at: http://www.kawa.net/text/spark/09/spark.html
This document appears to be notes from a presentation on augmented reality and visual tracking using JavaScript. It discusses accessing the user's camera, playing video, tracking patterns/colors/faces, examples of augmented reality projects, and markerless tracking. The presentation introduces augmented reality and JavaScript APIs for integrating augmented content, then covers visual tracking techniques like color tracking, face detection and real-world examples before discussing markerless tracking and popularity metrics.
The document provides an introduction to ARToolKit, a software library for building augmented reality applications. It explains basic concepts such as position tracking and overlaying objects on video, and describes how the library works through marker detection. It also covers topics such as camera calibration and the development of simple applications using ARToolKit's core functions, and provides examples of its use on different systems such as Android.
The document discusses augmented reality and how to create augmented reality experiences using JavaScript. It provides steps to access a user's camera, play the video stream, and track patterns or objects in the video using techniques like fiducial markers, face detection, and color tracking with libraries like tracking.js. Examples are given for single and multiple object tracking.
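The colour tracking mentioned above reduces, at its core, to testing each pixel of a frame against a target colour within a tolerance. The sketch below is not the tracking.js API, just an illustration of that per-pixel step; the function names and threshold are ours:

```javascript
// Returns true when an RGB pixel is within `threshold` (Euclidean
// distance in RGB space) of the target colour. This is the per-pixel
// test colour trackers run over every frame before clustering
// matching pixels into tracked regions.
function isColorMatch(r, g, b, target, threshold = 50) {
  const dr = r - target.r;
  const dg = g - target.g;
  const db = b - target.b;
  return Math.sqrt(dr * dr + dg * dg + db * db) <= threshold;
}

// Scan a flat RGBA pixel buffer (the layout returned by a canvas
// getImageData call) and collect the coordinates of matching pixels.
function findMatchingPixels(data, width, target, threshold = 50) {
  const matches = [];
  for (let i = 0; i < data.length; i += 4) {
    if (isColorMatch(data[i], data[i + 1], data[i + 2], target, threshold)) {
      const p = i / 4;
      matches.push({ x: p % width, y: Math.floor(p / width) });
    }
  }
  return matches;
}
```

In a real page you would draw each video frame to a canvas, call getImageData, and pass its data here; a library such as tracking.js wraps this whole loop (plus region clustering) behind its tracker objects.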
This document discusses building HTML5 virtual reality apps using Intel XDK. It explains that HTML5 is compelling for cross-platform VR apps because it is cross-platform, collaborative, and allows editing and testing changes quickly. Intel XDK can be used to build HTML5 and Cordova apps, and Cordova APIs allow accessing device features through JavaScript. The document provides examples of how to implement stereoscopic rendering, head tracking, and accessing device features in HTML5 VR apps.
A presentation at ISTAS13 that looks at the way Reality has been defined through the ages and a proposal for a new Reality Cube model that allows more explicit definitions of this fuzzy term.
This presentation looks at how Augmented Reality and Virtual Reality can be used in an Education/Learning context. It explores the main modes of interaction available and defines the unique and powerful benefits AR & VR deliver for educators. It was presented at Singapore EduTECH 2018.
This is our presentation from the Immersive Sydney #WebXRWeek event. This provides an overview of web based Mixed Reality and then dives into the specifics of the new #WebXR API. It includes market statistics and information on key release dates. It also includes links to #WebXR demos and other background information.
The Extended Reality Landscape #Reality17 (Rob Manson)
This is our presentation from the WebDirections #Reality17 event. It provides an overview of the Mixed/Extended Reality landscape and looks at what's coming next. It concludes by showing where #WebAR is up to right now and what's possible in your browser today.
ISMAR17 - Augmenting, Mixing & Extending Reality on the web (Rob Manson)
This presentation was part of the ISMAR17 Workshop on "Standards for Mixed & Augmented Reality" in Nantes, France (https://arstandardsworkshop17.wordpress.com/program/). It provides an overview of the latest developments in web based Mixed Reality including the newly released Natural Feature Tracking from awe.media (see the intro video here https://youtu.be/5_HWgrcWWts). It takes a look at developments & APIs that are currently forming. It takes a look at what is still needed and how these API gaps may be dealt with. And it explores what is currently playing out in the market and how this is likely to evolve in the near-to-midterm future.
Immersive Computing @ #YOWConnected 2017 (Rob Manson)
"Immersive" is the key defining factor of the next wave of computing. With each computing revolution the distance between the user and the computing experience has shrunk - with today's mobile devices making computing Pervasive. This next step is truly user-centered and puts you right at the heart of a deeply Immersive Experience. The key question that faces us now is "How do we get there from here?". Our near future seems to inevitably involve hyper-connected wearable and then embeddable computer displays. But how will we all migrate from today's mobile devices to this amazing new nerdvana?
This session takes a detailed look at what makes "Immersive-ness" so important. What standards and market forces are driving this evolution. And how you can navigate your way through all the technological change this will bring.
An intro to our new #Immersive event. "Immersive" is the key defining feature of the latest technology revolution. This presentation includes an overview of #MixedReality with recent updates in #VirtualReality and #AugmentedReality.
Computer Vision - now working in over 2 Billion Web Browsers! (Rob Manson)
This was presented at Augmented World Expo in Santa Clara (#AWE2017). A video of the live Natural Feature Tracking demo will be uploaded and linked from here soon.
The key benefit of using AR in your web browser is how quick and easy it is to share. You can send a single web link through social media or email and the recipient can just tap on the link and it works. But up until recently this has not included Computer Vision based AR for a number of technical and market reasons. This presentation will position the web browser in the overall context of Computer Vision history, and we'll look at how this has evolved through developments including jsartoolkit.js, tracking.js and AR.js. We'll then dive deeper into the latest developments to show how OpenCV performs running in the browser and how this compares to native applications. This deep dive will compare the different feature detection/extraction algorithms and how they perform on some well known image data sets. The session will conclude with demos that show how this all works right now in over 2 Billion capable web browsers.
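The feature detection algorithms that deep dive compares (for example the FAST test used inside ORB) share one core idea: a pixel is a corner candidate when enough contiguous pixels on a ring around it are all brighter, or all darker, than the centre. The following is a simplified, illustrative version of that test, not any library's API; real detectors sample a 16-pixel Bresenham circle from the image, whereas here the ring intensities are passed in directly:

```javascript
// Simplified FAST-style corner test: a pixel is a corner candidate when
// at least `n` contiguous ring pixels are all brighter than (center + t)
// or all darker than (center - t).
function isCornerCandidate(center, ring, t = 20, n = 12) {
  const brighter = ring.map(v => v > center + t);
  const darker = ring.map(v => v < center - t);
  const longestRun = flags => {
    // Double the array so runs that wrap around the ring are counted.
    let best = 0;
    let run = 0;
    for (const f of flags.concat(flags)) {
      run = f ? run + 1 : 0;
      if (run > best) best = run;
    }
    return Math.min(best, flags.length);
  };
  return longestRun(brighter) >= n || longestRun(darker) >= n;
}
```

Running a cheap test like this over every pixel, then computing descriptors only at the surviving candidates, is what makes in-browser feature extraction fast enough for live video.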
Web Standards for AR workshop at ISMAR13 (Rob Manson)
This work was presented at the Open Standards session at the IEEE ISMAR 2013 event. It provides a detailed overview and working examples that show exactly where Augmented Reality and Computer Vision are up to on the Web Platform.
This presentation also provides a detailed description of how to define exactly what the Augmented Web is.
An overview of how Web Standards and Augmented Reality are combining to create a new Augmented Web.
This presentation was part of the "Open and Interoperable AR" session at #AWE2013.
This market survey reviews how Web Standards have been adopted within the AR market across a wide range of commercial products and research projects. It aims to capture the state of the market at the end of 2012. This review provides a consolidated view of the use of Web Technologies within the AR industry and extrapolates how this trend is likely to evolve over the next 6-12 months.
This review contributes to the ongoing development of AR Standards by providing system architects, implementors and marketers a clear and concise summary of this key intersection between AR and the Web.
Shakespeare said “All the world is a stage” and with pervasive computing this has never been more true. As Mark Weiser predicted our computers and sensors are rapidly dissolving into the background of the world around us. But if this technology is truly pervasive then where does the User Interface go? This question shows why Augmented Reality is much more than just the latest novelty. It is part of a structural change in the way we use and interact with the network. This presentation will look at what this means for our sense of where and even who we are.
Shakespeare said "All the world is a stage" and with pervasive computing this has never been more true. As Mark Weiser predicted our computers and sensors are rapidly dissolving into the background of the world around us. But if this technology is truly pervasive then where does the User Interface go? This question shows why Augmented Reality is much more than just the latest novelty. It is part of a structural change in the way we use and interact with the network. This presentation will look at the sensors around us, the standards that are developing and the challenges that lie ahead.
New sensor-based Web Standards developments have punched a hole in the web that is letting the real world leak into the browser. The getUserMedia API now lets us access cameras and microphones, and JSARToolKit and JavaScript-based Natural Feature Tracking (like the examples from ICG Graz University) have shown that browsers can now be taught to perceive the world around them. Combine this with <canvas> and WebGL and you have a real working model for a Web Standards based Augmented Reality.
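As a concrete illustration of that pipeline, the sketch below grabs the camera with getUserMedia and copies a frame into a canvas so its pixels can be read by a JavaScript tracker. The element ids are hypothetical, and the document/navigator objects are passed as parameters purely so the sketch can run outside a browser; in a page you would pass the real `document` and `navigator` and repeat the draw on each requestAnimationFrame tick.

```javascript
// Minimal camera-to-canvas capture: once a frame is on the canvas,
// getImageData() exposes its raw pixels to libraries like JSARToolKit.
// `doc` and `nav` stand in for the browser's document and navigator.
async function captureFrame(doc, nav) {
  const video = doc.getElementById('camera');  // a <video> element
  const canvas = doc.getElementById('frame');  // a <canvas> element
  const stream = await nav.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;                    // pipe the camera into <video>
  await video.play();
  // Copy the current video frame onto the canvas for pixel access.
  canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
  return stream;
}
```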
On top of this we also have the OGC's Sensor Web Enablement and new developments like the Sensor API, plus the rapid spread of networked sensors and wireless Arduino-ised devices. Massively distributed, dynamic, immersive visualisation is now the new structural form for the modern web.
Augmented Reality lets you peel away the blinkers from your real world eyes to see the rich data and information that exists all around you. But up until now it has relied largely on proprietary tools and standards. Finally, we’re close to being able to augment our world using web technologies. Soon this will be a common part of the web browsing and mobile device experience. Now is the time to look at these future trends and the state of a specific list of API standardisation activities and the forces shaping them. We’ll also look at the current obstacles, risks and issues to explore what may prevent this landscape from evolving as it appears it will.
This presentation aims to document the AR standardisation efforts over the last few years as well as what’s possible right now and in the near future from a distinctly web-based perspective.
What does it mean when your apps can see, hear and feel? Digital sensors are flooding through our daily life. Mobile devices have microphones, cameras, accelerometers, digital compasses and now Near Field Communication (NFC) chips and scanners. But this is all just the very start. These small digital sensors are shrinking even further and just as Mark Weiser predicted they are soaking into the world around you.
In this session Rob will explore how you can use these new streams of sensor data to create innovative and dynamic new user experiences. He’ll examine, from an experience point of view, what the challenges and constraints of sensor driven apps are and how you can deal with this deeply technical domain that covers issues such as sensor fusion, latency, calibration and newly extended mental models.
But if that all sounds a bit dry and technical for you, don’t panic! Rob’s focus here is on how to create experiences using these technologies rather than the deep technical aspects themselves. However, he will help you get a clear overview of the work from the W3C Device API and Web Real Time Communication working groups, the ARStandards.org community and other related research activities. He’ll present tangible demonstrations that show how these new streams of “digital awareness” data can now be integrated into the experiences you create to literally bring your apps to life. This will leave you re-thinking your current projects and asking “Is that sense-able?”.
Rob will also explore where we are headed with all of this and how to plan for this mind expanding journey. This is far from just a technical discussion. It will provide an introduction to the ethical and moral issues created by these sensors. Issues that we will all need to address in our own lives providing yet another aspect to the key question “Is that sensible?”
Web3 refers to the next stage of the World Wide Web that allows information to be linked to real-world objects and locations using modern web and device capabilities. It builds on previous stages, Web 1.0 for basic HTML pages and Web 2.0 for social platforms, by incorporating real-world sensors and computing into the web experience. Rapid adoption of Web3 has been hindered by fragmentation across browsers and devices in supporting the relevant HTML5 features, but a compliance test can determine whether your device already supports core Web3 capabilities like video, geolocation, 3D and motion sensors.
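A compliance test of the kind described here is essentially feature detection against the browser's global objects. The following is a minimal, illustrative sketch, not the actual test mentioned above; the function takes a window-like object as a parameter so it can be exercised outside a browser, and in a page you would simply pass `window`:

```javascript
// Probe a window-like object for the HTML5 features an augmented-web
// page needs. Property names follow the standard browser APIs.
function checkWeb3Capabilities(win) {
  return {
    video: typeof win.HTMLVideoElement !== 'undefined',
    geolocation: !!(win.navigator && win.navigator.geolocation),
    camera: !!(win.navigator && win.navigator.mediaDevices &&
               win.navigator.mediaDevices.getUserMedia),
    webgl: typeof win.WebGLRenderingContext !== 'undefined',
    motion: typeof win.DeviceOrientationEvent !== 'undefined',
  };
}
```

A page can render this object as a pass/fail checklist so users can see at a glance whether their device is ready for augmented-web content.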
An AR mobile app called streetARtAPP gained traction without advertising, attracting over 25,000 unique users in 162 countries in its first month. It was promoted through blog posts and being featured as a Layar layer. While iPhones drove initial growth, Android now accounts for most Layar browser users. Creating apps and embedding the Layar player provides the deepest user engagement. The complexity of supporting multiple devices underscores the need for web and mobile apps alongside AR browsers.
The document announces an Augmented Reality Meetup event in Sydney from March 2011. The event was sponsored by ARDevCamp Sydney and included discussions on AR standards, AR gaming demonstrations, and networking over drinks. Attendees were encouraged to use the #ARSyd tag to share and join the Sydney ARDevCamp in June.
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
2. Who am I?
Rob Manson @nambor
CEO of MOB Labs, the creators of buildAR.com
Chair of the W3C's Augmented Web Community Group
Invited Expert with the ISO, W3C & the Khronos Group
Co-Founder of ARStandards.org
Author of “Getting Started with WebRTC” (available July)
https://buildAR.com
3. ARStandards Workshop in Seoul 2010
Patterns of Interest Proposal – Rob Manson
6. Current local image processing pipeline
using the Media Capture & Streams API
1. Setup <video> element in the DOM
a. declaratively then via getElementById or similar
b. createElement(“video”) then appendChild()
2. Access the camera
a. getUserMedia()
NOTE: Currently can only select the default camera
3. Pipe camera stream into <video>
a. video.srcObject = stream (older browsers: video.src = URL.createObjectURL(stream))
4. Setup <canvas> element in the DOM
a. declaratively then via getElementById or similar
b. createElement(“canvas”) then appendChild()
5. Get 2D drawing context
a. canvas.getContext('2d');
6. Draw <video> frame onto <canvas>
a. context.drawImage(video, left, top, width, height);
7. Get RGBA Uint8ClampedArray of the pixels
a. context.getImageData(left, top, width, height).data;
8. Burn CPU (not GPU) cycles
a. for (blah) { for (blah) { … } … }
NOTE: May also integrate other sensor data here
9. Render results
a. using HTML/JS/CSS
b. using another <canvas> and drawImage()
c. using WebGL
d. a combination of the above
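The nine steps above can be sketched end-to-end in JavaScript. This is a minimal sketch, not the presenter's original code: the grayscale() helper stands in for step 8's generic pixel loop, the element wiring assumes a modern browser with navigator.mediaDevices, and rendering back onto the same canvas is just one of the step-9 options.

```javascript
// Step 8's per-pixel loop as a pure function: converts an RGBA
// Uint8ClampedArray to grayscale in place (alpha untouched).
function grayscale(pixels) {
  for (let i = 0; i < pixels.length; i += 4) {
    // Standard luma weights for R, G, B.
    const y = 0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2];
    pixels[i] = pixels[i + 1] = pixels[i + 2] = y;
  }
  return pixels;
}

// Browser-only wiring for steps 1-7 and 9 (skipped outside the browser).
if (typeof document !== 'undefined' && navigator.mediaDevices) {
  const video = document.createElement('video');    // step 1b
  const canvas = document.createElement('canvas');  // step 4b
  document.body.appendChild(canvas);
  const context = canvas.getContext('2d');          // step 5

  navigator.mediaDevices.getUserMedia({ video: true }) // step 2
    .then((stream) => {
      video.srcObject = stream;                     // step 3
      return video.play();
    })
    .then(() => {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      const tick = () => {
        context.drawImage(video, 0, 0, canvas.width, canvas.height);  // step 6
        const frame = context.getImageData(0, 0, canvas.width, canvas.height); // step 7
        grayscale(frame.data);                      // step 8 (CPU-bound loop)
        context.putImageData(frame, 0, 0);          // step 9b
        requestAnimationFrame(tick);
      };
      tick();
    })
    .catch((err) => console.error('camera access failed:', err));
}
```

Note that the loop in grayscale() is exactly the "burn CPU (not GPU) cycles" step: it touches every pixel of every frame serially in JavaScript.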
9. What's in the near future?
Integrating WebRTC and Visual Search
Using WebGL/GLSL to utilise GPU parallelism
Khronos Group's OpenVX
Khronos Group's Camera Working Group
Lots more demos to share! 8)
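To make the WebGL/GLSL point concrete: the per-pixel work from step 8 can move to the GPU, where a fragment shader runs for every pixel in parallel instead of a serial JavaScript loop. A sketch of such a shader, held as a JavaScript string the way WebGL shader sources are typically embedded (the uFrame and vTexCoord names are illustrative, not from the slides):

```javascript
// GLSL ES fragment shader performing the same grayscale conversion on the
// GPU: the video frame is uploaded as a texture and each fragment is
// processed in parallel.
const grayscaleFragmentShader = `
  precision mediump float;
  uniform sampler2D uFrame;   // current video frame as a texture
  varying vec2 vTexCoord;     // interpolated from the vertex shader
  void main() {
    vec4 c = texture2D(uFrame, vTexCoord);
    float y = dot(c.rgb, vec3(0.299, 0.587, 0.114)); // luma weights
    gl_FragColor = vec4(vec3(y), c.a);
  }
`;
```

The string would be compiled with gl.createShader(gl.FRAGMENT_SHADER) and linked into a program; drawing a full-screen quad with the video texture bound then filters the whole frame in one GPU pass.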