Presentation of the AlloSphere project at UCSB. Imagine a three-story-high sphere suspended in a cube, where 3D video and audio are used for scientific discovery and exploration.
A seminar PPT on input devices such as the light pen and the voice recognizer. This is a simple seminar that is easy to understand at first view.
For instance, a keyboard or computer mouse is an input device for a computer, while monitors and printers are output devices. Devices for communication between computers, such as modems and network cards, typically perform both input and output operations.
The monitor is an output device of the computer.
Like a TV, it displays text and graphics on the screen.
Video adapters are responsible for delivering the images to the monitor.
The E-Ball is a sphere-shaped concept computer, the smallest design among all laptops and desktops ever made.
This ball hints at the future of computing.
You may have seen similar concepts, but the unique feature of this ball is that when it is closed, no one can guess that a whole computer is hidden inside.
Slides for the Information Visualisation unit of my 2013 online course on HCI – part 3
https://hcibook.com/hcicourse/2013/unit/10-physicality
* study the old to design the new – exposing the latent lessons in well-designed products
* understanding the multiple feedback loops in digital devices with physical form
* model physical device states of the device ‘unplugged’ using physigrams
An output device is any peripheral that receives data from a computer, usually for display, projection, or physical reproduction. For example, the image shows an inkjet printer, an output device that can make a hard copy of any information shown on your monitor.
Assalamu alaikum!
I am Shakaib Ashraf. This presentation is for my students who want to give a presentation on virtual reality.
introduction
types of virtual reality
how virtual reality works
These topics are covered in this presentation.
Pray for me.
Thank you.
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera – IJERD Editor
- Gesture gaming is a method by which users with a laptop, PC, or Xbox play games using natural bodily gestures. This paper presents a way of playing free Flash games on the internet using an ordinary webcam with the help of open-source technologies. In human activity recognition, emphasis is placed on pose estimation and the consistency of the player's pose. These are estimated with an ordinary web camera at resolutions from VGA up to 20 MP. Our work involved showing the user a 10-second introduction on how to play a particular game using gestures and the various kinds of gestures that can be performed in front of the system. The initial RGB values for the gesture component are obtained by instructing the user to place the component in a red box for about 10 seconds after the short introduction finishes. The system then opens the chosen game on popular Flash game sites such as Miniclip, Games Arcade, or GameStop, loads it by clicking at the appropriate places, and brings it to the state where the user only needs to perform gestures to start playing. At any time, the user can call off the game by hitting the Esc key; the program then releases all controls and returns to the desktop. The results obtained using an ordinary webcam matched those of the Kinect, and users could relive the gaming experience of free Flash games on the net. Effective in-game advertising could therefore also be achieved, offering disruptive growth for advertising firms.
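The calibration-and-tracking idea in the abstract can be sketched in a few lines. This is my own illustrative reconstruction, not the authors' code: the frame layout, box position, and colour-distance threshold are all assumptions.

```python
def average_color(frame, box):
    """Mean (r, g, b) of the pixels inside box = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    pixels = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return tuple(sum(p[c] for p in pixels) / len(pixels) for c in range(3))

def locate_component(frame, target, threshold=60.0):
    """Centroid of the pixels whose colour lies within `threshold` of `target`."""
    matches = [(x, y)
               for y, row in enumerate(frame)
               for x, px in enumerate(row)
               if sum((px[c] - target[c]) ** 2 for c in range(3)) ** 0.5 < threshold]
    if not matches:
        return None
    return (sum(x for x, _ in matches) / len(matches),
            sum(y for _, y in matches) / len(matches))

# Toy 4x4 frame: a red "gesture component" in the top-left, dark background elsewhere.
RED, BG = (250, 20, 20), (10, 10, 10)
frame = [[RED if (x < 2 and y < 2) else BG for x in range(4)] for y in range(4)]

calibrated = average_color(frame, (0, 0, 2, 2))  # user holds the component in the red box
print(locate_component(frame, calibrated))       # centroid of the tracked component
```

In a real system the per-pixel loop would run on live webcam frames, but the calibrate-then-threshold-then-centroid structure is the same.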
Mixed Reality Interfaces and Product Management – Jeremy Horn
Slides Vikas Batra recently used in his discussion w/ mentees of The Product Mentor.
Synopsis: In this talk Vikas shares recent developments in the fields of Virtual Reality (VR) and Augmented Reality (AR), along with use cases of how enterprises are using AR, to help you identify how you could use it to gain a competitive advantage in your market.
The Product Mentor is a program designed to pair Product Mentors and Mentees from around the World, across all industries, from start-up to enterprise, guided by the fundamental goals…Better Decisions. Better Products. Better Product People.
Throughout the program, each mentor leads a conversation in an area of their expertise that is live streamed and available to both mentees and the broader product community.
http://TheProductMentor.com
Building the Matrix: Your First VR App (SVCC 2016) – Liv Erickson
The slides from my talk, Building The Matrix: Your First VR App at Silicon Valley Code Camp, Oct. 2016. Development, design, and sample projects for virtual reality applications.
• 2016-01-26: presented at a group meeting
• [UIST 2015] FoveAR: Combining an Optically See-Through Near-Eye Display with Spatial Augmented Reality Projections
by Hrvoje Benko, Eyal Ofek, Feng Zheng, Andrew D. Wilson
Virtual reality is a recently introduced technology that replaces the real world with a synthetic one. It makes people believe that they are in another reality.
AI for All: Biology is eating the world & AI is eating Biology – Intel® Software
Advances in cell biology and the creation of immense amounts of data are converging with advances in machine learning to analyze that data. Biology is experiencing its AI moment, driving the massive computation involved in understanding biological mechanisms and driving interventions. Learn how cutting-edge technologies such as Software Guard Extensions (SGX) in the latest Intel Xeon processors and Open Federated Learning (OpenFL), an open framework for federated learning developed by Intel, are helping advance AI in gene therapy, drug design, disease identification, and more.
Python Data Science and Machine Learning at Scale with Intel and Anaconda – Intel® Software
Python is the number 1 language for data scientists, and Anaconda is the most popular Python platform. Intel and Anaconda have partnered to bring scalability and near-native performance to Python with simple installations. Learn how data scientists can now access oneAPI-optimized Python packages such as NumPy, Scikit-Learn, Modin, Pandas, and XGBoost directly from the Anaconda repository through simple installation and minimal code changes.
Streamline End-to-End AI Pipelines with Intel, Databricks, and OmniSci – Intel® Software
Preprocess, visualize, and build AI faster at scale on Intel Architecture. Develop end-to-end AI pipelines for inferencing, including data ingestion, preprocessing, and model inferencing with tabular, NLP, RecSys, video, and image data, using the Intel oneAPI AI Analytics Toolkit and other optimized libraries. Build performant pipelines at scale with Databricks and end-to-end Xeon optimizations. Learn how to visualize with the OmniSci Immerse Platform and experience a live demonstration of the Intel Distribution of Modin and OmniSci.
AI for good: Scaling AI in science, healthcare, and more – Intel® Software
How do we scale AI to its full potential to enrich the lives of everyone on earth? Learn about AI hardware and software acceleration and how Intel AI technologies are being used to solve critical problems in high energy physics, cancer research, financial inclusion, and more. Get started on your AI Developer Journey @ software.intel.com/ai
Software AI Accelerators: The Next Frontier | Software for AI Optimization Su... – Intel® Software
Software AI Accelerators deliver orders of magnitude performance gain for AI across deep learning, classical machine learning, and graph analytics and are key to enabling AI Everywhere. Get started on your AI Developer Journey @ software.intel.com/ai.
Advanced Techniques to Accelerate Model Tuning | Software for AI Optimization... – Intel® Software
Learn about the algorithms and associated implementations that power SigOpt, a platform for efficiently conducting model development and hyperparameter optimization. Get started on your AI Developer Journey @ software.intel.com/ai.
Reducing Deep Learning Integration Costs and Maximizing Compute Efficiency | S... – Intel® Software
oneDNN Graph API extends oneDNN with a graph interface which reduces deep learning integration costs and maximizes compute efficiency across a variety of AI hardware including AI accelerators. Get started on your AI Developer Journey @ software.intel.com/ai.
AWS & Intel Webinar Series - Accelerating AI Research – Intel® Software
Scale your research workloads faster with Intel on AWS. Learn how the performance and productivity of Intel Hardware and Software help bridge the gap between ideation and results in Data Science. Get started on your AI Developer Journey @ software.intel.com/ai.
Whether you are an AI, HPC, IoT, Graphics, Networking or Media developer, visit the Intel Developer Zone today to access the latest software products, resources, training, and support. Test-drive the latest Intel hardware and software products on DevCloud, our online development sandbox, and use DevMesh, our online collaboration portal, to meet and work with other innovators and product leaders. Get started by joining the Intel Developer Community @ software.intel.com.
Advanced Single Instruction Multiple Data (SIMD) Programming with Intel® Impl... – Intel® Software
Explore practical elements, such as performance profiling, debugging, and porting advice. Get an overview of advanced programming topics, like common design patterns, SIMD lane interoperability, data conversions, and more.
Build a Deep Learning Video Analytics Framework | SIGGRAPH 2019 Technical Ses... – Intel® Software
Explore how to build a unified framework based on FFmpeg and GStreamer to enable video analytics on all Intel® hardware, including CPUs, GPUs, VPUs, FPGAs, and in-circuit emulators.
Review state-of-the-art techniques that use neural networks to synthesize motion, such as mode-adaptive neural networks and phase-functioned neural networks. See how next-generation CPUs with reinforcement learning can offer better performance.
RenderMan*: The Role of Open Shading Language (OSL) with Intel® Advanced Vect... – Intel® Software
This talk focuses on the newest release in RenderMan* 22.5 and its adoption at Pixar Animation Studios* for rendering future movies. With native support for Intel® Advanced Vector Extensions, Intel® Advanced Vector Extensions 2, and Intel® Advanced Vector Extensions 512, it includes enhanced library features, debugging support, and an extensive test framework.
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning over just any symbolic structure is not sufficient to really harvest the gains of NeSy. These gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
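A toy example (my own, not from the talk) of semantics as predictable inference: when a relation in a knowledge graph carries a semantics, here the transitivity of a hypothetical `located_in` relation, some predicted links follow deterministically from that semantics rather than from statistics.

```python
def predict_links(triples, relation="located_in"):
    """Close `relation` under transitivity; return only the newly inferred triples."""
    known = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(known):
            for (c, r2, d) in list(known):
                if r1 == r2 == relation and b == c and (a, relation, d) not in known:
                    known.add((a, relation, d))   # a -> b -> d implies a -> d
                    changed = True
    return known - set(triples)

kg = [("Genoa", "located_in", "Italy"),
      ("Italy", "located_in", "Europe")]

print(predict_links(kg))   # the link the semantics forces us to predict
```

A purely statistical link predictor might or might not recover ("Genoa", "located_in", "Europe"); the semantics makes the inference predictable.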
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality – Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Generating a custom Ruby SDK for your web service or Rails API using Smithy – g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
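To make concrete what a grid simulation such as a power flow computes, here is a deliberately tiny DC power flow on a made-up 3-bus network in plain Python. This is an illustration of the concept only; it does not use the PowSyBl or pypowsybl APIs, and all susceptances and injections are invented per-unit values.

```python
def dc_power_flow(lines, injections):
    """DC power flow on a 3-bus network; bus 0 is the slack (angle fixed at 0).
    lines: {(i, j): susceptance}; injections: {bus: P} for non-slack buses 1 and 2.
    Solves B * theta = P for the two unknown angles via Cramer's rule."""
    b = {frozenset(k): v for k, v in lines.items()}
    B11 = b[frozenset((0, 1))] + b[frozenset((1, 2))]
    B22 = b[frozenset((0, 2))] + b[frozenset((1, 2))]
    B12 = -b[frozenset((1, 2))]
    det = B11 * B22 - B12 * B12
    P1, P2 = injections[1], injections[2]
    theta = {0: 0.0,
             1: (P1 * B22 - B12 * P2) / det,
             2: (B11 * P2 - B12 * P1) / det}
    # In the DC approximation, line flow is susceptance times the angle difference.
    return {(i, j): lines[(i, j)] * (theta[i] - theta[j]) for (i, j) in lines}

flows = dc_power_flow({(0, 1): 10.0, (0, 2): 10.0, (1, 2): 10.0},
                      {1: 0.5, 2: -1.0})   # bus 1 generates 0.5 pu, bus 2 consumes 1.0 pu
print(flows)
```

Real tools like PowSyBl solve the full non-linear AC equations over thousands of buses, but this captures the shape of the computation: build a network model, solve for the electrical state, read off the flows.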
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence gathering facilities spread across more than 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GraphRAG is All You need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference, 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also ran a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Connector Corner: Automate dynamic content and events by pushing a button – DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
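The approve/reject branching above amounts to a human-in-the-loop dispatch, which can be sketched as follows. The function and messages are purely illustrative stubs; they are not the Integration Service connector API.

```python
def handle_review(button, campaign):
    """Route a reviewer's button click to the follow-up action.
    Stand-ins for the real connector calls (Jira/Zendesk, Slack)."""
    if button == "Approve":
        return f"Jira/Zendesk ticket created for design review of '{campaign}'"
    elif button == "Reject":
        return f"Slack alert sent: '{campaign}' was rejected"
    raise ValueError(f"unknown button: {button}")

print(handle_review("Approve", "Spring Launch"))
print(handle_review("Reject", "Spring Launch"))
```

The point of the human-in-the-loop pattern is that the workflow blocks until a person clicks one of the buttons, and only then takes the matching branch.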
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
15. High Level VR Software Stack
A stack diagram: VR applications, whether a browser (WebVR), a game engine, or another application, talk through the OpenXR application interface to the VR runtime (server and compositor), which runs on DirectX (+ other system APIs) on Windows and drives the VR hardware through the OpenXR driver (HW) interface.
16. Windows Compositor – Desktop Window Manager (DWM)
• Takes Windows surfaces from multiple applications
• Composites them onto a single desktop in 2.5D
• Handles transparency/translucency
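Per pixel, the 2.5D composition with transparency described above reduces to the classic "over" operator. A minimal sketch, using straight alpha with channel values in 0..1 (illustrative, not the DWM implementation):

```python
def over(src, dst):
    """Composite straight-alpha src over dst; each pixel is (r, g, b, a) in 0..1."""
    sa, da = src[3], dst[3]
    out_a = sa + da * (1.0 - sa)                  # combined coverage
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    out_rgb = tuple((src[c] * sa + dst[c] * da * (1.0 - sa)) / out_a
                    for c in range(3))
    return out_rgb + (out_a,)

# A 50%-translucent red window over an opaque blue desktop blends to purple:
print(over((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))   # → (0.5, 0.0, 0.5, 1.0)
```

The compositor applies this per pixel for every overlapping surface, back to front, which is how translucent window edges and shadows appear on the desktop.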
17. VR Compositor
Takes frames (T1, T2, T3) from applications and performs:
• Composition
• Time warp
• Barrel distortion
• Chromatic aberration correction
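Barrel distortion, one of the compositor stages listed, is commonly modelled as a radial polynomial r' = r(1 + k1·r² + k2·r⁴), and chromatic aberration correction applies slightly different coefficients per colour channel so that red, green, and blue land on the same spot after the lens. The coefficients below are arbitrary illustrative values, not those of any real HMD.

```python
def barrel_distort(x, y, k1=0.22, k2=0.24):
    """Map a point in normalised lens coordinates through radial distortion."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x * scale, y * scale)

def distort_rgb(x, y):
    """Per-channel coefficients pre-compensate the lens's chromatic aberration."""
    return {"r": barrel_distort(x, y, 0.21, 0.23),
            "g": barrel_distort(x, y, 0.22, 0.24),
            "b": barrel_distort(x, y, 0.23, 0.25)}

print(barrel_distort(0.5, 0.0))   # x is pushed outward to ≈ 0.535
```

In the real compositor this mapping runs per pixel on the GPU, warping the rendered image so it looks rectilinear through the headset's lenses.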
18. Information Flow
• Head and controller positions
• Controller state
• Input events
• Game images
• Haptics events
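One way to picture this information flow is as two per-frame records, one in each direction between the VR runtime and the application. The structuring and field names below are my own sketch, not any specific runtime's API.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple        # (x, y, z) in metres
    orientation: tuple     # quaternion (w, x, y, z)

@dataclass
class FrameInput:          # runtime -> application, each frame
    head: Pose
    controllers: dict      # hand -> Pose
    buttons: dict          # hand -> {button: pressed}
    events: list           # discrete input events this frame

@dataclass
class FrameOutput:         # application -> runtime, each frame
    eye_images: dict       # "left"/"right" -> rendered image handle
    haptics: list          # (hand, amplitude, duration_ms) pulses

snapshot = FrameInput(
    head=Pose((0.0, 1.7, 0.0), (1.0, 0.0, 0.0, 0.0)),
    controllers={"left": Pose((-0.2, 1.2, -0.3), (1.0, 0.0, 0.0, 0.0))},
    buttons={"left": {"trigger": True}},
    events=["trigger_pressed"],
)
print(snapshot.buttons["left"]["trigger"])
```

Poses and input flow one way; rendered eye images and haptics pulses flow back the other.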
21. VR Vision – The PC Platform
The PC is doing more than ever to stimulate the user and drive their senses. We believe VR will evolve to allow multi-modal input and feedback; the slide shows a variety of the technologies through which the PC will provide feedback to the user.
• Physically driven audio – audio effects that are true to life, reflecting off objects and surface materials, accounting for distance and orientation.
• Intense graphics – higher resolution, faster frame rates, and depth-capable displays fully immerse players in the experience.
• Hands-free input – depth cameras incorporated into HMDs and connected to the PC track arm and hand movement, allowing controller-less input.
• Multiplayer support – the simulation accounts for multiple players in the same room, including both players' VR experiences being driven off one high-performance PC.
• Wireless freedom – high-speed, low-latency wireless driving of all peripherals.
• Haptic feedback – vest and controllers mimic in-game interactions to further stimulate the player and make interactions between teammates more physical: the vest reacts with a jolt from enemy weapon fire, vest and controllers relay a low-health condition, and you can physically accept a healing item handed to you by a teammate, with the object's heft conveyed to you.
• CPU workloads – particle effects and physics, artificial intelligence, destruction and object persistence, initialization and level setup, spatial binaural 3D audio, frame rate, mega-tasking workloads.
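As a taste of the first CPU workload listed, particle physics, here is semi-implicit Euler integration of particles under gravity; the values are illustrative.

```python
GRAVITY = (0.0, -9.81, 0.0)   # m/s^2

def step(particles, dt):
    """particles: list of dicts with 'pos' and 'vel' 3-vectors; advance dt seconds."""
    for p in particles:
        # Semi-implicit Euler: update velocity first, then position with the new velocity.
        p["vel"] = tuple(v + g * dt for v, g in zip(p["vel"], GRAVITY))
        p["pos"] = tuple(x + v * dt for x, v in zip(p["pos"], p["vel"]))

ps = [{"pos": (0.0, 10.0, 0.0), "vel": (1.0, 0.0, 0.0)}]
step(ps, 0.1)
print(ps[0]["pos"])   # x advances with the initial velocity; y falls under gravity
```

A game does this (plus collisions and forces) for tens of thousands of particles every frame, which is why it appears on the slide as a headline CPU workload.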
22. VR Vision – The Sensor Platform
Fully immersing users in VR will include full awareness of their state and movement. At the same time, the PC will be capturing a wide array of conscious and unconscious input from the player and feeding it back to the simulation for processing.
• Facial expression tracking – mask sensors map your reactions to your in-game character, enhancing non-verbal communication and heightening involvement.
• Object scanning – depth cameras in the HMD allow scanning real-world objects to bring into the game: swing a favorite toy sword or bring a child's art-class project into social spaces.
• Eye tracking – players' eyes are reflected in their in-game characters, expressing emotion and intent. Gaze tracking can unlock UI triggers and drive rendering performance focus.
• Biometric sensors – heart rate, pupil dilation, and galvanic skin response can be used to convey emotional responses such as fear or excitement. Responses can be analyzed to tailor game difficulty, give cues to other players, or limit fatigue.
• Intuitive controllers – controllers evolve to intuitively communicate greater degrees of intent, with analog triggers, contact sensing, and greater degrees of affordance.
• Physics-driven presence – motion tracking of HMDs and controllers, combined with 3D cameras and CPU-driven physics, is used to track and confer full-body articulation.
23. Intel Projects
• Third-person camera AI (Yoann Pignole, Gamasutra, 2015).
• DisplayLink wireless VR – a pipeline diagram: the source (PC) renders each frame and applies time warp, barrel distortion, and chroma correction, then transmits (Tx) the imaging data; the sink (HMD) receives (Rx) it, scales it for the display panel, and returns pose input from its sensors, over a USB display link.
• Intel's partnership with ISVs on VR content.
24. 2020 Virtual Reality Vision
Adventure together with friends, compete with rivals, or bring the world along for the show – New York, Los Angeles, London, worldwide spectators. Open innovation and best-of-class performance allow VR to thrive and lead on the PC.
Usage: truly immersive multiplayer gaming allows players to travel to other worlds, whether next to each other or across the globe.
Focus areas: gaming, eSports, social.
Key capability areas (hypotheses):
• Mind-tripping quality – photo-real, deep behavioral simulation, responsive enough to exceed sensory requirements: “Believed I was there.”
• Wireless HMDs and peripherals and/or wearable form factors for unhindered and unlimited movement.
• Multi-sensory stimulation – physiologically transparent quality display and graphics, physically modeled 3D audio, (haptic stimulation), (vestibular dampening).
• Multi-modal input, sensing, and responsive simulation – eye tracking, expression tracking, tactile feedback, biometric feedback (galvanic response, heart rate, etc.), voice and outside audio cuing, environment capture.
• Social and competitive – your expressions and movements driving your presence in virtual interactions with others.
25. The Future of AR/VR/MR Gaming & OpenXR
• Engaging more senses, better experiences
  → more immersion
  → more presence
• More innovation
26. The Future of AR/VR/MR Gaming & OpenXR (continued)
• Engaging more senses, better experiences
  → more immersion
  → more presence
• More innovation
• Dependent on:
  → more content in more places
  → more hardware in more places
  → easier cross-platform development and testing