Unity developed a new architecture that improves support for existing and future augmented reality (AR) and virtual reality (VR) platforms. Learn about the technology under the hood, the resulting benefits and improvements to the platform, and how it impacts your workflows for creating AR/VR experiences.
Speakers: Mike Durand, Matt Fuad - Unity
Watch the session on YouTube: https://youtu.be/Stqk1GxlSK0
Inside Flutter: Widgets, Elements, and RenderObjects – Hansol Lee
These slides explain how Flutter's Widgets, Elements, and RenderObjects work together in plain language. Once you understand how Flutter works internally, you will have many more tools in your belt for building stunning Flutter apps.
An online training course run by the FIWARE Foundation in conjunction with the i4Trust project. The core part of this virtual training camp (21-24 June 2021) covered all the skills needed to develop smart solutions powered by FIWARE. It introduced the basics of Digital Twin programming using the linked-data concepts JSON-LD and NGSI-LD, and combined these with common smart data models for sharing and augmenting context data.
In addition, it covered the supplementary FIWARE technologies used to implement the common functions typically required when architecting a complete smart solution: Identity and Access Management (IAM) functions to secure access to digital twin data, and functions enabling the interface with IoT and third-party systems or the connection with different tools for processing and monitoring current and historical big data.
This 12-hour online training course can be used to obtain a good understanding of FIWARE and NGSI Interfaces and form the basis of studying for the FIWARE expert certification.
Extending this core part, the virtual training camp adds introductory and deep-dive sessions on how FIWARE and iSHARE technologies, brought together under the umbrella of the i4Trust initiative, can be combined to provide the means for the creation of data spaces in which multiple organizations can exchange digital twin data in a trusted and efficient manner, collaborating in the creation of innovative services based on data sharing. In addition, SMEs and Digital Innovation Hubs (DIHs) that go through this complete training and are located in countries eligible under Horizon 2020 will be equipped with the necessary know-how to apply to the recently launched i4Trust Open Call.
The Java ecosystem is very broad, with different technologies including Java SE, Java EE/Jakarta EE, Spring, numerous application servers, and other frameworks. Wherever you are in Java, Azure supports your workload and process with an abundance of choice – from IaaS to fully managed services. You can run any application architecture, from monoliths to containerized monoliths, all the way to fully microservices-based apps.
We see three broad patterns for running Java applications in the cloud, depending on how much control or productivity you need.
The first is lift and shift with Virtual Machines:
Virtual machines provide the most flexibility, control, and visibility while moving to the cloud, especially for an initial lift and shift of Java workloads. Azure provides a variety of Java-focused VM images and solution templates in the Azure Marketplace to get you up and running quickly.
The second is modernization using containers:
Containers provide portability, flexibility, scalability, manageability, repeatability, and predictability.
Azure provides best-of-breed support for Docker and Kubernetes, especially through Azure Kubernetes Service (AKS) and Azure Red Hat OpenShift.
Finally, Azure has the most managed hosting options for Java applications of any major cloud platform with fully managed PaaS for Spring, Tomcat, and JBoss EAP:
Managed services offer ease-of-use, ease-of-management, productivity, and lower total cost of ownership.
You can focus on building your applications, not managing infrastructure.
All of this is supported by managed databases and DevOps tooling:
Use fully managed SQL and NoSQL databases, including PostgreSQL, MySQL, Cosmos DB, and Azure SQL.
Keep using the tools you love, with plugins for IntelliJ and Eclipse and integrations with a variety of DevOps tools such as Maven, Gradle, Jenkins, and GitHub.
What do you mean by “API as a Product”? – Nordic APIs
You may have heard the term “API Product.” But what does it mean? In this talk I will introduce the concept and explain the benefits and challenges of transforming your organization to view your APIs as measurable products that expose your company's capabilities, creating agility, autonomy, and acceleration. Traditional product manufacturers create new products, launch them into the marketplace, and then measure their value; we will teach you to view your APIs in the same way. Concepts covered in this presentation include designing APIs with Design Thinking, funding your product, building teams, marketing your API, managing your marketplace, and measuring success.
These slides describe the essential rules for running Architectural Katas. They were created as part of the January 2019 Software Architecture Meetup session.
In this session, Stefan goes deep into the security aspects of Flux v2. We’ll start by explaining the Flux authorization model and how it relates to Kubernetes RBAC and account impersonation. Then we’ll compare the soft and hard multitenancy models from a GitOps perspective. We’ll explore the configuration options platform admins can use to lock down Flux in multitenant environments, and how they can onboard tenants onto clusters using the Flux CLI and Git. Finally, we’ll talk about the Flux roadmap for 2022.
Overview of the basics of modules, plug-ins and projects in UE4, and a deep dive into integrating third-party dependencies. Presented at MIGS 2016 in Montreal.
This presentation organizes and introduces the concepts and use cases of features such as Animation (Legacy), Mecanim, Timeline, and Simple Animation, as well as what newer features such as the Playable API, Animation C# Jobs, and Kinematica are and what they make possible.
These slides are a slightly revised version of the slides presented at TECH x GAME COLLEGE #5.
https://techxgamecollege.connpass.com/event/99824/
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2021/01/khronos-standard-apis-for-accelerating-vision-and-inferencing-a-presentation-from-the-khronos-group/
Neil Trevett, President of the Khronos Group and Vice President of Developer Ecosystems at NVIDIA, presents the “Khronos Standard APIs for Accelerating Vision and Inferencing” tutorial at the September 2020 Embedded Vision Summit.
The landscape of processors and tools for accelerating inferencing and vision applications continues to evolve rapidly. Khronos standards, such as OpenCL, OpenVX, SYCL and NNEF, play an increasingly central role in connecting application developers to the latest silicon—productively, efficiently and portably.
In this talk, Trevett provides an overview and the latest updates on Khronos standards relevant for machine learning and computer vision, and previews how they are likely to evolve in the future.
Submitted in partial fulfillment of a Bachelor of Technology degree: a mini project built completely from scratch for the .NET mini project external evaluation.
KCD Munich - Cloud Native Platform Dilemma - Turning it into an Opportunity – Andreas Grabner
This talk was given at KCD Munich - July 17 2023
Abstract
“Kubernetes is a platform for building platforms. It’s a better place to start: not the endgame,” Kelsey Hightower tweeted in November 2017. Six years later, the cloud native community is faced with 159 different CNCF projects to choose from. Entering the CNCF landscape can be overwhelming!
Cloud-native platform engineering, with its white papers, best practices, and reference architectures, is here to convert this dilemma into an opportunity. Internal Developer Platforms (IDPs) are being built as we speak, enabling organizations to harness the power of Kubernetes as a self-service platform.
Join this talk with Andreas Grabner, CNCF Ambassador, to get insights on tooling, use cases, and best practices so we can all fulfill the idea Kelsey put out years ago.
Understand the recent news and changes announced by Microsoft regarding the future of the .NET Framework and its new architecture, and the scenarios it covers. Also includes details about the newly enabled web scenarios.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2018-embedded-vision-summit-trevett
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Neil Trevett, President of the Khronos Group and Vice President at NVIDIA, presents the "APIs for Accelerating Vision and Inferencing: Options and Trade-offs" tutorial at the May 2018 Embedded Vision Summit.
The landscape of SDKs, APIs and file formats for accelerating inferencing and vision applications continues to rapidly evolve. Low-level compute APIs, such as OpenCL, Vulkan and CUDA are being used to accelerate inferencing engines such as OpenVX, CoreML, NNAPI and TensorRT. Inferencing engines are being fed via neural network file formats such as NNEF and ONNX. Some of these APIs, like OpenCV, are vision-specific, while others, like OpenCL, are general-purpose. Some engines, like CoreML and TensorRT, are supplier-specific, while others, such as OpenVX, are open standards that any supplier can adopt. Which ones should you use for your project?
In this presentation, Trevett presents the current landscape of APIs, file formats and SDKs for inferencing and vision acceleration, explaining where each one fits in the development flow. Trevett also highlights where these APIs overlap and where they complement each other, and previews some of the latest developments in these APIs.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2014-embedded-vision-summit-khronos
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Neil Trevett, President of Khronos and Vice President at NVIDIA, presents the "OpenVX Hardware Acceleration API for Embedded Vision Applications and Libraries" tutorial at the May 2014 Embedded Vision Summit.
This presentation introduces OpenVX, a new application programming interface (API) from the Khronos Group. OpenVX enables performance and power optimized vision algorithms for use cases such as face, body and gesture tracking, smart video surveillance, automatic driver assistance systems, object and scene reconstruction, augmented reality, visual inspection, robotics and more.
OpenVX enables significant implementation innovation while maintaining a consistent API for developers. OpenVX can be used directly by applications or to accelerate higher-level middleware with platform portability. OpenVX complements the popular OpenCV open source vision library that is often used for application prototyping.
How Igalia Is Driving Innovation In Embedded Systems With Open Source Technol... – Igalia
Igalia is an Open Source consultancy that offers in-depth knowledge across the software stack and a broad selection of cutting-edge industries.
In this talk, we will present some of the projects and solutions that Igalia has worked on for various embedded devices, such as smart TVs, set-top boxes, in-vehicle infotainment systems, and home automation devices. We will showcase some of the open source technologies that Igalia has contributed to or created, such as WPE WebKit, Chromium, Servo, GStreamer, Mesa or the Linux Kernel.
Finally, we will also share our experience and vision on how open source technologies can enable innovation and performance in embedded systems.
(c) Embedded Open Source Summit 2023
June 28 2023
Prague, Czech Republic
https://eoss2023.sched.com/
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit-khronos
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Neil Trevett, President of the Khronos Group, presents the "Vision API Maze: Options and Trade-offs" tutorial at the May 2016 Embedded Vision Summit.
It’s been a busy year in the world of hardware acceleration APIs. Many industry-standard APIs, such as OpenCL and OpenVX, have been upgraded, and the industry has begun to adopt the new generation of low-level, explicit GPU APIs, such as Vulkan, that tightly integrate graphics and compute. Some of these APIs, like OpenVX and OpenCV, are vision-specific, while others, like OpenCL and Vulkan, are general-purpose. Some, like CUDA and Renderscript, are supplier-specific, while others are open standards that any supplier can adopt. Which ones should you use for your project?
In this presentation, Neil Trevett, President of the Khronos Group standards organization, updates the landscape of APIs for vision software development, explaining where each one fits in the development flow. Neil also highlights where these APIs overlap and where they complement each other, and previews some of the latest developments in these APIs.
Using synthetic data for computer vision model training – Unity Technologies
During this webinar Unity’s computer vision team provides an overview of computer vision, walks through current real-world data workflows, and explains why companies are moving toward synthetically generated data as an alternate data source for model training.
Watch the webinar: https://resources.unity.com/ai-ml/cv-webinar-dec-2021
The Tipping Point: How Virtual Experiences Are Transforming Global Industries – Unity Technologies
When it comes to emerging technology, Forrester found that “94% of those who have implemented real-time 3D are expanding their investment.”
Wonder why? Learn more in this webinar featuring guest speaker Paul Miller, a principal analyst at Forrester. He covers the key findings of a commissioned study conducted by Forrester Consulting on behalf of Unity, published in March 2020.
Learn more: https://on.unity.com/2Yz49kg
Watch the webinar: https://on.unity.com/3aYGlsF
Take a peek at the slide from the second installment of our 2020 roadmap: Live Games.
Watch the presentation on YouTube: https://www.youtube.com/watch?v=w6sn8bJiZ2g
Got questions about the roadmap? Check out the Q&A over on the Unity forum: https://on.unity.com/2wV3SwD
Take a peek at the slide from the first installment of our 2020 roadmap: Core Engine & Creator Tools.
Watch the presentation (hosted by Will Goldstone, Product Manager) on YouTube: https://www.youtube.com/watch?v=dDjsS4NPqFU
Got questions about the roadmap? Check out the Q&A over on the Unity forum: https://on.unity.com/CreateRoadmapQA
How ABB shapes the future of industry with Microsoft HoloLens and Unity - Uni... – Unity Technologies
It's high time for augmented reality to be brought to a wider audience. At ABB, we know that it is not just a gimmick any more. However, every innovative technology comes with new challenges. In these slides, we show how to overcome them and deliver valuable products with HoloLens and Unity.
Speakers:
Maciej Włodarczyk - ABB
Rafał Kielar - ABB
Watch the session on YouTube: https://youtu.be/QFsj8Pi_3Ho
Autodesk and Unity announced a collaboration last year to streamline workflows and enable seamless development across the AEC design, build and operate lifecycles. This fall, Unity Reflect launches, giving designers, architects, and engineers the ability to seamlessly federate their Revit models for real-time 3D.
Andrew Sullivan - Digital Delivery Manager, SHoP Architects will provide an overview of how they are using the product to enable real-time decision making, reduce the time between revisions and meetings, and ultimately improve design review and construction planning processes.
Recording available here: https://youtu.be/qe0yxHA0fHI
How Daimler uses mobile mixed realities for training and sales - Unite Copenh... – Unity Technologies
Daimler Protics implemented mixed and augmented reality on mobile devices and used the Microsoft HoloLens for automotive production, training, and marketing. Discover the challenges Daimler Protics faced and the Unity solutions that eased the mixed reality implementation.
Speakers:
Daniel Keßelheim - Daimler Protics
Sebastian Rigling - Daimler Protics
Session available here: https://youtu.be/fTc1c8iTGqU
How Volvo embraced real-time 3D and shook up the auto industry - Unite Copenha... – Unity Technologies
Hear from Volvo's lead Unity developer, Timmy Ghiurau, about how he broke new ground by bringing technology forged in gaming into one of the leading brands in the automotive industry. Timmy will share how he used his gaming background to inspire people across his large organization to adopt Unity and embrace real-time 3D as a way of working.
Timmy Ghiurau - Volvo
Session available here: https://youtu.be/CD4Go3Uv5Uc
QA your code: The new Unity Test Framework – Unite Copenhagen 2019 – Unity Technologies
Are you involved in testing or QA on projects in Unity? In these slides, you'll get an overview of the state of Unity for all things testing-related, and have the opportunity to share your stories of success, failure, pain, and glory. Learn from your fellow developers and give feedback on how Unity could help you hold your projects to a higher standard of quality. You will also get an introduction to the newest features in the Test Framework.
Speakers:
Christian Warnecke - Unity
Richard Fine - Unity
Watch the session on YouTube: https://youtu.be/wTiF2D0_vKA
Engineering.com webinar: Real-time 3D and digital twins: The power of a virtu... – Unity Technologies
From buildings and infrastructure to industrial machinery and factories, digital twins are becoming integral across the industrial sector. In this webinar, first shown on Engineering.com, leaders from Unity and Unit040, provider of digital twin platform Prespective, share how digital twins add value at all stages of the project and product lifecycle, from the early stages of design to predictive maintenance using IoT data.
Watch the webinar here: create.unity3d.com/real-time-3d-and-digital-twins
Supplying scalable VR training applications with Innoactive - Unite Copenhage... – Unity Technologies
Major automotive brands like Volkswagen are leveraging the power of virtual reality to create immersive training programs that can be delivered across multiple global locations at the same time. Learn how to scale the production and distribution of real-time VR training in enterprise.
Speakers:
Thomas Wimmer - Innoactive
Andreea Raducan - Innoactive
Watch the session on YouTube: https://youtu.be/5DNFUTfyOEc
XR and real-time 3D in automotive digital marketing strategies | Visionaries ... – Unity Technologies
Augmented reality (AR), virtual reality (VR), and mixed reality (MR) – collectively known as XR – are making inroads in the automotive industry. Join this session led by Visionaries 777, which works with major auto brands like INFINITI, to learn about the range of immersive experiences you can build with Unity to create a better customer experience that results in more engagement and sales.
Speakers:
David Castañeda - Visionaries 777
Frantz Lasorne - Visionaries 777
Session available here: https://youtu.be/WJpeWHGXyms
Real-time CG animation in Unity: unpacking the Sherman project - Unite Copenh... – Unity Technologies
Get a complete walkthrough of the end-to-end animation workflow of the Sherman project. Learn how to use Unity for creating CG animation and take a deep dive into the real-time fur system in Unity.
Speaker:
Mike Wuetherick - Unity
Watch the session on YouTube: https://youtu.be/fFfWxErJMkY
Creating next-gen VR and MR experiences using Varjo VR-1 and XR-1 - Unite Cop... – Unity Technologies
The developers of Varjo VR-1 learned a lot about human eye resolution and the demands it puts on virtual reality (VR) content. In these slides, you'll explore what next-generation VR can mean for your VR experiences. Learn about what matters the most when it comes to visual quality, the possible caveats, and the role performance requirements play in this equation.
Speaker:
Mikko Strandborg - Varjo
What's ahead for film and animation with Unity 2020 - Unite Copenhagen 2019 – Unity Technologies
Unity is enabling film and animation studios to revolutionize their pipelines with features developed specifically to empower storytellers who are creating linear and interactive content. Learn more about features such as Python, Shotgun, the Arbitrary Output Variables (AOV) used in Recorder for export, Alembic, and Universal Scene Description (USD).
Speaker:
Mathieu Muller - Unity
Watch the session on YouTube: https://youtu.be/wrc3R-BoDGs
How to Improve Visual Rendering Quality in VR - Unite Copenhagen 2019 – Unity Technologies
Virtual reality (VR) is a new way to deliver thrilling and engaging content and allows for a deep level of immersion, and it is growing rapidly. Despite this, VR, especially on mobile, currently has several limitations that can make for an unrealistic, unconvincing and, sometimes, uncomfortable experience. To achieve the true potential of VR, these limitations must be either solved or mitigated. Ways of mitigating them include optimal alpha-compositing approaches, texture-filtering techniques, and bump-mapping methods for use with VR content. In these slides, technology company Arm outlines how to improve the rendering quality of your VR content, describing the most common pitfalls and bad practices before providing clear examples and mitigation solutions for how best to overcome them.
Speaker:
Ryan O'Shea - ARM
Digital twins: the power of a virtual visual copy - Unite Copenhagen 2019 – Unity Technologies
From buildings and infrastructure to industrial machinery and factories, digital twins are becoming integral visualization tools across the industrial sector. Learn how Unit040, a company specializing in visualization and simulation, creates digital twins that combine real-time 3D technology with BIM, CAD, and CAE systems to add value at all stages of the building and product lifecycle, from the early design phase to predictive maintenance using Internet of Things (IoT) data.
Speakers:
Pieter Weterings - Unit040
Guido van Gageldonk - Unit040
Watch the session on YouTube: https://youtu.be/j4i14p89h_s
Virtual or real? AR Foundation best practices from Krikey - Unite Copenhagen ... – Unity Technologies
The AR Foundation toolkit has been critical for Krikey to build compelling AR games that function cross-platform, at scale. Krikey, an AR mobile gaming application, used dynamic ground plane detection and camera translation to enable users to play 3D games that interact with the real world. These slides cover some of the best practices Krikey developed while using AR Foundation.
Speakers:
Ketaki Shriram - Krikey
Jhanvi Shriram - Krikey
Watch the session on YouTube: https://youtu.be/5MKRuJEA1hI
UiPath Test Automation using UiPath Test Suite series, part 3 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30 May 2024. We discuss what testing is, what agile testing is, and finally what testing in DevOps looks like. We also ran a lovely workshop in which participants explored different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... – Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
Search and Society: Reimagining Information Access for Radical Futures – Bhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs, while dismantling the artificial separation between work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Connector Corner: Automate dynamic content and events by pushing a button – DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, machine learning over just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains come only when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
JMeter webinar - integration with InfluxDB and Grafana – RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Key Trends Shaping the Future of Infrastructure.pdf – Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The talk covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
5. Timeline: 2014–2019
• Started working w/ Oculus for Rift + GearVR support
• Started working w/ Microsoft for HoloLens support
• “One-click integration” -- landed support for Oculus, HoloLens, and PSVR
• Direct platform implementations
• Landed VR multi-device support, including Cardboard / Daydream
• Added shared implementation
• Landed ARKit and ARCore support
• New plugin architecture
• Migrated platform implementations as packages using the plugin architecture
• AR Foundation released, first user of the plugin architecture
• Landed Magic Leap support
• VR abstraction for display
6. What we’ve learned…
• Flexibility with Packages: increased flexibility through packages, with updates decoupled from Unity core engine releases.
• New AR/VR Features: new AR/VR features are released at an accelerated pace.
• New AR/VR Hardware: the market will see a continued stream of new devices from more vendors.
…and our plan to improve:
• “Build once, deploy anywhere”: a single framework for using common features across multiple platforms (AR Foundation).
• Plugin Architecture: a standardized set of APIs designed to improve the community’s access to AR/VR devices and features.
• Common Functionalities: devices share a common set of features across AR and VR – display, input, etc.
7. New Plugin Architecture
— Provides a native API to HMD manufacturers and exposes high-level managed (C#) APIs to Unity developers
— Multiple backend plugins (providers) implement individual engine features (subsystems), exposed as common developer-facing C# APIs
— Runtime discoverable, runtime activation
  – Common life-cycle across all subsystems / providers
— Backwards compatibility
8. Subsystems
A subsystem is a logical group of hardware and/or software functionality such as display, rendering, input, and more. It fundamentally improves how we deliver and manage SDKs for our XR platform integrations.
9. Each subsystem contains…
• Common engine code, which handles communicating with the C# interface, the native interface, and the rest of the engine
• A native interface, which is implemented by multiple backends (providers) via dynamic libraries
• A developer-facing C# interface
11. Supported Subsystems
— Camera
— Depth
— Display
— Environment Probes
— Face Tracking
— Gesture
— Human Body
— Image Tracking
— Input
— Meshing
— Object Tracking
— Planes
— Raycast
— Reference Points
— Session
12. Getting Started
— All officially supported platforms are now implemented as packages
  – Provider releases are now decoupled from Unity core engine releases
— Entry point: the “XR Plugin Management” package
14. What’s Next?
— Migration of platform SDK implementations as packages with the new plugin architecture, landing as verified in 2019.3
— Direct platform implementations will be marked as deprecated in 2019.3
— Continued improvements to the UI/UX of the “XR Plugin Management” package
Join the conversation on Unity’s XR forum!
— “XR Plugins & Subsystems”
Editor's Notes
Plugin Architecture:
This new architecture will allow for easier device integration into Unity in the future, and will give Unity developers greater access to devices and features. It also allows us to respond more quickly to new developments in the industry and deploy them to Unity developers sooner.
Subsystems, and the APIs that developers use to interact with them, are designed to be completely independent from one another, and a given subsystem may or may not be present at all depending on the platform. As a real-life example: say you are writing an AR experience that can use 2D image recognition to trigger some behavior, but prefers 3D object recognition for that trigger. Your cross-platform code can query for the presence of an ObjectTracking subsystem; if that subsystem is available, you use it, and on platforms where it isn’t, the application gracefully falls back to the ImageTracking subsystem. None of the application code needs to know about any specific platform with this architecture: the code simply queries the availability of a particular feature.
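The query-then-fall-back pattern described in this note can be sketched roughly as follows. This is a hedged illustration, not code from the talk: it assumes the AR Foundation-era types `SubsystemManager` (UnityEngine) and the `XRObjectTrackingSubsystemDescriptor` / `XRImageTrackingSubsystemDescriptor` descriptor types from the `UnityEngine.XR.ARSubsystems` package, and it requires the Unity runtime to execute.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARSubsystems;

public static class TrackingFeatureQuery
{
    // True if the current platform ships a provider for the given
    // subsystem descriptor type -- no platform-specific checks needed.
    static bool SubsystemAvailable<TDescriptor>()
        where TDescriptor : ISubsystemDescriptor
    {
        var descriptors = new List<TDescriptor>();
        SubsystemManager.GetSubsystemDescriptors(descriptors);
        return descriptors.Count > 0;
    }

    public static void ChooseTrigger()
    {
        if (SubsystemAvailable<XRObjectTrackingSubsystemDescriptor>())
        {
            // Preferred path: trigger behavior via 3D object recognition.
        }
        else if (SubsystemAvailable<XRImageTrackingSubsystemDescriptor>())
        {
            // Graceful fallback: trigger behavior via 2D image recognition.
        }
    }
}
```

The application code above never names a platform; it only asks which features are present, which is exactly the portability the plugin architecture is after.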
In addition, we could have a scenario where two platforms both provide Plane Tracking but one of them only detects horizontal planes and another detects both vertical and horizontal planes. Minor differences in capability like this can be expressed via metadata called a Subsystem Descriptor. In that simple example a readonly C# property expresses the underlying platform’s capabilities but still does so in a functionality-focused manner rather than in a platform-specific manner.
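As a hedged sketch of the descriptor idea (again assuming the `UnityEngine.XR.ARSubsystems` package, whose `XRPlaneSubsystemDescriptor` exposes read-only capability properties, and a Unity runtime to execute in), the plane-tracking example could be queried like this:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARSubsystems;

public static class PlaneCapabilityCheck
{
    public static void Report()
    {
        var descriptors = new List<XRPlaneSubsystemDescriptor>();
        SubsystemManager.GetSubsystemDescriptors(descriptors);
        foreach (var d in descriptors)
        {
            // Capability metadata is functionality-focused, not
            // platform-specific: the same query works regardless of
            // which provider is installed.
            Debug.Log(d.id
                + ": horizontal=" + d.supportsHorizontalPlaneDetection
                + ", vertical=" + d.supportsVerticalPlaneDetection);
        }
    }
}
```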
Provider Framework:
This layer defines the implementation of the platform and device-specific SDKs, written against predefined subsystem interfaces that connect to the Interface Layer. The Provider framework also handles the translation of platform-specific representations into platform-agnostic subsystem data.
Interface Layer:
This layer contains the optimized core engine implementation that will execute provider code written against the predefined subsystem interfaces. Note that subsystem APIs purely provide data - not GameObjects.
Developer Framework:
This layer exposes the functionality of the subsystems in a developer-friendly way, which includes game object-based representations of the data we get from APIs. Again, these are the public APIs that we encourage developers to code against.
So, how does this impact your workflow, and why should you care?
The developer framework, or AR Foundation, as well as the individual providers (like Oculus, Windows MR, Magic Leap, etc.) are all distributed via the Unity Package Manager. And that’s great because it allows developers to get new functionality and bug fixes without the need to upgrade to an entirely new version of Unity. This allows for increased flexibility where updates to the SDKs can be accessed outside of the core Unity release cycle.
This is great, but we realize that loading and managing all of these packages, for the various platforms you want to build for, can get cumbersome. So, we’ve created the XR Plugin Management package, designed to be a single entry point for exactly that, loading and managing the various platform SDKs you want to target.
Here’s what it looks like in the editor. Instead of going to Player Settings as a first step, the settings for XR and SDKs will now appear under Project Settings. Before that though, you’ll need to download the XR Plugin Management package from the Package Manager, making sure you enable preview packages. The XR Management package will now serve as the main entry point for loading the right package for each target SDK/platform and managing respective settings. XR Management is also needed to make the XR Settings show up in Project Settings. Once downloaded, the XR Management package will take you to Project Settings, where the loading and management of supported XR platforms will take place.