This document summarizes demos presented at the Intel Collaboration Center in March 2014. It describes 28 demos covering areas such as collaborative robots, wireless charging, augmented reality paper, in-vehicle infotainment systems, biometric authentication technologies, and perceptual computing applications. Each demo is accompanied by one or more bullet points briefly describing its goals and capabilities. The document concludes with legal disclaimers regarding the information presented.
The path to personalized, on-device virtual assistants (Qualcomm Research)
Machine learning has ignited the voice UI and virtual assistant revolution as machine speech recognition approaches the accuracy of humans. The AI powering key voice UI components, such as automatic speech recognition and natural language processing, has traditionally run in the cloud due to computing, storage, and power constraints. However, on-device processing of voice UI provides unique benefits, such as instant response, reliability, and privacy. And fusing multiple on-device sensor inputs, such as cameras and accelerometers, in addition to microphones adds a level of personalization that will take us closer to a true personal assistant.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/11/getting-efficient-dnn-inference-performance-is-it-really-about-the-tops-a-presentation-from-intel/
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Gary Brown, Director of AI Marketing at Intel, presents the “Getting Efficient DNN Inference Performance: Is It Really About the TOPS?” tutorial at the September 2020 Embedded Vision Summit.
This presentation looks at how performance is measured among deep learning inference platforms, starting with the simple peak TOPS metric, why it’s used and why it might be misleading. Brown looks at compute efficiency as measured by real benchmark workload performance and how it relates to peak TOPS, comparing performance across Intel’s inference platforms. He also discusses how developers can use Intel’s DevCloud for the Edge to quickly access Intel’s inference platforms.
XR viewers are a new category of AR or VR devices that allow for lighter and smaller designs since they are connected to smartphones or other computer accessories. For more details, check out this great webinar, which has been adapted from a presentation we gave at AWE 2019.
An overview of Bluetooth Smart (Low Energy) for Android. This was presented to the Android Australia User Group in March 2014 in Melbourne, Australia. We explore Bluetooth Smart advantages, support on Android devices, look at Apple's iBeacon technology and emerging Bluetooth smart services.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/07/certifying-neural-networks-for-autonomous-flight-a-presentation-from-daedalean-ai/
For more information about edge AI and vision, please visit:
http://www.edge-ai-vision.com
David Haber, Head of Deep Learning at Daedalean AI, delivers the presentation “Certifying Neural Networks for Autonomous Flight” at the Edge AI and Vision Alliance’s May 2020 Member Briefing meeting.
Deep neural networks have demonstrated impressive performance on visual recognition tasks relevant to the operation of airborne vehicles such as autonomous drones and personal electric air-taxis. For this reason, their application to visual problems, including object detection and image segmentation, is promising (and even necessary) for autonomous flight. The downside of this increased model performance is higher complexity, which poses challenges related to interpretability, explainability and (eventually) the certification of safety-critical aviation applications.
For example, how do you convince the regulators (and ultimately the public) that your model is robust to adversarial attacks? How do you prove that your training and testing datasets are exhaustive? How do you test edge cases when your input space is infinite and any mistake is potentially fatal? Over the last year, Daedalean AI has partnered with EASA (the European Union Aviation Safety Agency) to explore how existing regulations around safety-critical applications can be adapted to encompass modern machine-learning techniques.
In this talk, Haber discusses the different stages of a typical machine learning pipeline as they relate to design choices for neural network architectures, desirable properties for training and test datasets, model generalizability, and how to protect against adversarial attacks. He also considers the opportunities, challenges and learning that may apply more generally when building AI for safety-critical applications in the future.
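The robustness question above can be made concrete with a toy adversarial-attack check. The sketch below applies the fast gradient sign method (FGSM, one well-known attack family) to a hand-built logistic classifier; the model, weights, and epsilon are illustrative assumptions, not anything from Daedalean's actual pipeline.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One fast-gradient-sign step on a logistic classifier:
    nudge x in the direction that increases the loss for label y."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted P(class 1)
    grad_x = (p - y) * w                    # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

# Toy model and an input it classifies correctly (logit = 1.5 > 0).
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1

x_adv = fgsm_perturb(x, w, b, y, eps=1.0)
# A certification-style argument would have to show the decision does
# NOT flip for any perturbation this small; here it does flip.
assert (x @ w + b) > 0 and (x_adv @ w + b) < 0
```

For a real aircraft-grade model the input space is far larger, which is exactly why exhaustive testing is infeasible and formal or statistical robustness evidence becomes necessary.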
A late upload. This slide was presented on Aug 31, 2019, when I delivered a talk for AIoT seminar in University of Lambung Mangkurat, Banjarbaru. It's part of Republic of IoT 2019 event.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/11/acceleration-of-deep-learning-using-openvino-3d-seismic-case-study-a-presentation-from-intel/
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Manas Pathak, Global AI Lead for Oil and Gas at Intel, presents the “Acceleration of Deep Learning Using OpenVINO: 3D Seismic Case Study” tutorial at the September 2020 Embedded Vision Summit.
The use of deep learning for automatic seismic data interpretation is gaining the attention of many researchers across the oil and gas industry. The integration of high-performance computing (HPC) AI workflows in seismic data interpretation brings the challenge of moving and processing large amounts of data from HPC to AI computing solutions and vice-versa.
In this presentation, Pathak illustrates this challenge via a case study using a public deep learning model for salt identification applied on a 3D seismic survey from the F3 Dutch block in the North Sea. He presents a workflow to address this challenge and perform accelerated AI on seismic data. The Intel Distribution of OpenVINO toolkit was used to increase the inference performance of a pre-trained model on an Intel CPU. OpenVINO allows CPU users to get significant improvement in AI inference performance for high memory capacity deep learning models used on large datasets without any significant loss in accuracy.
IP's 20 year evolution - adaptation or extinction (Design And Reuse)
From its infancy 20 years ago, the semiconductor IP industry has evolved into the major driving force of today’s semiconductor landscape. This talk will take a historical view of the changes in the industry over the past twenty years, looking at how the landscape (environment) has changed and how individual companies either adapted or simply went away. What will the IP industry and players in the future look like?
This presentation describes future perspectives for embedded devices given the spread of ubiquitous applications. It shows the transition from the Internet of Things to the Web of Things and presents Webinos as a platform for the WoT.
Opportunities & Challenges in IoT - Future of IoT industry in Indonesia 2019 ... (Andri Yadi)
It's a late share. I was honored to represent the Indonesia IoT Association to discuss the future of the IoT industry in Indonesia - the opportunities and challenges for years to come - during the FGD on Development of the National IoT Industry 2019-2024.
Universal ICT Device Controller for the Visually Challenged (IJCI Journal)
With today's modern lifestyle, ICT devices that were once considered luxuries have turned into necessities. One of the main problems associated with these ICT devices is that they all come with separate remote controllers. All these remote controllers have different buttons, designed in their own customized way, so there is a lack of commonality or interoperability between different devices or between different vendors. This becomes a major problem when visually challenged people need to use these devices. In this paper, we present a novel approach that acts as a universal intelligent remote controller for all electronic devices and is extremely user-friendly for the visually challenged. It uses two transceivers: one at the ICT device end, which communicates directly with the ICT device, and the other at the user end, i.e. the smartphone.
AXONIM Devices offers digital consumer device design and new product development services. Our highly skilled engineering teams carry out complete development projects, starting from scratch with the concept suggested by the customer and continuing through the future device's architecture and its functional and structural models.
We prepare full design documentation for manufacturing the device enclosure, a 3D model, and design documentation for the digital device. Our specialists select and purchase components, run production and assembly of printed circuit boards (PCBs), and test the assembled device.
With a strong background in embedded systems design, our developers execute all required tasks within the development cycle: schematic design, PCB design, firmware design and development, FPGA design, digital signal processing, porting and adapting embedded operating systems to a given platform, BSP and driver development, application development, operator interface or user application development, device prototyping, and manufacturing support.
While mobile data gave way to the digital service economy that fueled the growth of the public cloud, 5G and the Internet of Things (IoT) promise to unleash a similar disruption at the edge.
OpenNESS offers cloud and IoT developers an easy-to-use reference software toolkit to create and deploy applications at on-premise and network edge locations. By simplifying complex networking technology, OpenNESS exposes standards-based APIs from 3GPP and ETSI to application developers. Within the toolkit, applications can steer data traffic intended for the edge at 5G latencies and provide connectors to analytics and cloud service provider frameworks. With this open source toolkit, application developers can port applications created for the cloud to the edge and run them at any edge location.
The Wireless Remote Control Car Based on ARM9 (IOSR Journals)
Abstract: The Internet of Things (IoT) is of great importance in promoting and guiding the development of information technology and the economy. At present, applications of the IoT are developing rapidly, but due to the special requirements of some applications, existing technology cannot meet them very well, and much research work is being done to build the IoT. Wi-Fi based wireless remote control offers high bandwidth and data rates, non-line-of-sight transmission, large-scale data collection, and high cost-effectiveness, and it supports video monitoring, which cannot be realized with plain RF. Research on a Wi-Fi based remote control car therefore has high practical significance for the development of the Internet of Things. Based on current research work and the characteristics of Wi-Fi, this paper discusses controlling the car using a Wi-Fi module; its conditions can be monitored from a remote PC or laptop that supports Wi-Fi. The PC or laptop interface has two tabs: in the first tab we can monitor the conditions, and in the second tab four buttons control the car in the forward, backward, left, and right directions. Keywords: S3C2440 (ARM9), Wi-Fi module, camera, DC motors with driver IC, laptop with Wi-Fi module.
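The four-button control described above amounts to a tiny command protocol between the laptop UI and the car's Wi-Fi module. Here is a minimal sketch in Python; the command bytes 'F'/'B'/'L'/'R', the host address, and the port are hypothetical, since the paper does not specify the actual wire format.

```python
import socket

# Hypothetical one-byte command per direction button.
COMMANDS = {"forward": b"F", "backward": b"B", "left": b"L", "right": b"R"}

def command_for(button):
    """Map a UI button press to the byte sent to the car."""
    return COMMANDS[button]

def send_command(button, host="192.168.1.10", port=5000):
    """Send one direction command to the ARM9 board, which would be
    listening on its Wi-Fi module's address (illustrative values)."""
    with socket.create_connection((host, port), timeout=2) as s:
        s.sendall(command_for(button))
```

The second tab's four buttons would each call `send_command` with the matching direction, while the first tab streams camera frames from the board for monitoring.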
2. (#013) PALRO*
Communication Partner Robot
A new collaborative relationship: robot to human, robot to device.
Voice-activated ease of use
Works with companion devices via WLAN (PC, camera, smartphone, tablet)
Retrieves info from the Internet (news, weather ... and the Cloud)
Free SDK provided for unique application development
Optimal usage at nursing homes (in line with the Ministry of Health, Labour and Welfare guideline to prevent vital function disorders)
Collaboration:
Fujisoft Inc. *Third-party brands and names are the property of their respective owners.
3. (#015) Wireless Charging
An embedded wireless charging system lets you place a laptop on the table and have it charge.
AC adapter free
Place & charge
No wires
A step toward unifying the various types of adapters
This demo is a concept prototyped by Murata Manufacturing, Kokuyo Furniture, and Intel.
There are no plans to release Ultrabook™ products incorporating this technology.
This demo uses a technology method different from Rezence, which is promoted by the A4WP.
Collaboration:
Murata Manufacturing Co., Ltd.
Kokuyo RDI Center
4. (#021) ON THE FLY* PAPER
ON THE FLY* PAPER is a completely new infotainment system that combines "paper" with digital content through projection mapping.
Place a "Paper" card on the table and text and multimedia data magically appear, mapped onto the paper.
Collaboration:
takram design engineering
5. (#022) TIZEN* IVI connected car experience
Linux*-based TIZEN* IVI is enabling a common SW platform to foster open innovation and adapt to emerging standards.
This demonstration showcases the smooth animation and beautiful interface available with the ZENRIN DataCom 3D Navigation Engine on Tizen* IVI. The ZENRIN DataCom 3D Navigation Engine boasts native support for Tizen* IVI, rendered in HD with OpenGL. The system also features ultra-smooth transitions, lightning-fast rendering of map data, and ZENRIN DataCom's new easy-to-follow lane guidance functionality.
Collaboration:
ZENRIN DataCom *Third-party brands and names are the property of their respective owners.
6. (#023) Ethernet AVB and full digital speaker
[Diagram: a PoE-capable Ethernet AVB hub linking full-digital speakers, with a Bluetooth source]
*AVB: Audio Video Bridging
*PoE: Power over Ethernet
Audio streams are delivered over Ethernet AVB and played back on full-digital speakers.
The Ethernet AVB system reduces the number of cables and makes power consumption more efficient.
High-quality full-digital speakers remove the need for a high-power analog amplifier.
By integrating the digital speaker with PoE-capable Ethernet AVB, a single Ethernet cable can carry both power and the audio stream.
Collaboration:
D-CLUE Technologies Co., Ltd.
Onkyo Corporation
Trigence Semiconductor
7. (#024) Eye Tracking - Next generation user interface
Eye tracking enables you to offer a completely new user experience on devices: intuitive, natural, and blazingly fast.
Eye tracking is helping to open up a new world of possibilities for computer manufacturers and software developers. A variety of usages:
Games
Automobiles
Computer interaction
Medical
Security
Collaboration:
Tobii Technology K.K.
8. (#025) Intel® RealSense™ Technology - Perceptual Computing -
Eyes, ears, voice, touch, emotion, and context for an immersive, intuitive, and exciting life-like experience.
Voice: speech recognition
3D capability: facial tracking; close-range tracking (poses, finger and hand tracking, gestures); augmented reality
Intel PerC Challenge II Grand Prize Winner: Shikumi Design
Perceptual computing drives innovations
Perceptual computing roadmap
9. (#026) Olympus Media Glancer (MEG)*
"Inspiring Notification" is an essential element in enriching people's lives. This prototype provides an experience for noticing exciting things in life and for creating new communication networks.
Non-dorky looks, ultra-light weight, small form factor, and wireless communication
True hands-free operation
See-through optics backed by the original "Olympus Pupil Division See-Through Technology" enable high visibility, no interference with sight, and operation even under bright sunlight
Illustration: Masako Mori
Collaboration:
Olympus Corporation
*Third-party brands and names are the property of their respective owners.
*"MEG" is a development code name
10. (#027) Rapid Prototyping using Intel® Edison Platform
The Intel® Edison platform is a Wi-Fi/Bluetooth-capable, SD-card-size embedded computer that can be used to rapidly prototype application concepts.
Developers can see Intel's approach to designing small form-factor IoT and wearable device prototypes for functional testing and rapid design iteration.
Prototype development approach to:
Mechanical
Electronics
Software
Prototypes shown: wand, cup, and badge, all built on the Intel® Edison platform
11. (#028) Palm Vein Hybrid Authentication System
Palm Vein Hybrid Authentication uses visible/optical imaging to detect vein patterns. A best fit for smartphone and tablet environments.
Uses a standard smartphone camera for palm vein authentication
Proprietary 3D image correction technology
A complete software solution that runs on any device
Achieved higher accuracy than fingerprints
Excellent portability; applicable to multiple platforms
Collaboration:
Universal Robot Co., Ltd.
12. (#029) Palm vein user authentication
Ultra small, like a 500-yen coin! W 25 mm, T 6 mm, weight 4 g.
Your palm becomes your key to authenticating yourself with this ultra-small non-touch palm vein sensor. Live a safe and secure digital life with the ease of just showing your palm.
Integration into Ultrabook™ devices and tablets
Ease of use by showing your palm
High-precision biometric authentication adopted by various industries, including financial, medical, and public institutions
Available for anyone to use
Hard to steal, complex to copy
[Images: palm vein authentication usage; tablet integration]
Collaboration:
Fujitsu Ltd.
Fujitsu Frontech Ltd.
13. (#030) 3D Face Recognition - FastAccess* 3D
The computer recognizes your face in 3D.
With advanced face recognition and depth-sensing photo/video rejection, FastAccess* 3D makes logging into Windows* and websites fast, simple, and ultra-convenient.
Collaboration:
Sensible Vision *Other names and brands are the property of their respective owners
14. (#031) Wireless Docking Mat
A wireless charging mat that allows seamless power supply to your PC with easy installation. We are experimenting with communication techniques alongside charging to meet the mobility needs of computing.
Wireless charging mat
Easy installation
Simple charging
Wireless communication implementation
Full wireless docking solution in the near future
[Images: easy installation; charging a PC from the mat]
Collaboration:
Murata Manufacturing Co., Ltd
15. (#032) Realtime Super Resolution
Intel® Xeon Phi™ coprocessors enable HD video up-conversion in real time: old SD video can be converted to HD on the fly.
The Express5800/HR120a-1 is a 1U rack server designed for the efficient processing of big data. It can incorporate up to two Intel® Xeon® processor E5-2600 product family devices together with up to two Intel® Xeon Phi™ coprocessor boards mounted in PCI slots. As each Intel® Xeon Phi™ coprocessor can incorporate 50 or more cores, each of which is capable of executing four threads, a single server can simultaneously execute a maximum of 520 threads in parallel. Intel C++ Composer XE contributed to the development.
Collaboration:
NEC Corporation
16. (#033) QoE-based Video over Wireless
Enhance the quality of experience for adaptive video streaming applications on mobile devices, taking into consideration the end-to-end wireless channel condition.
By using quality side-information sent with the video, buffer state information on the device, and QoE-based adaptation algorithms on the mobile device:
Improved video quality
Fewer re-buffering events
More efficient wireless bandwidth usage
16
Subjective Quality Estimator
Content Analyzer
Device Detector
Objective Quality
Calculator
MOS Estimator
Estimated MOS
Spatial details (S)
Low High
Motion level (M)
Low
High
MS-SSIM
1920x1080
800x480
1280x720
Display resolution (R)
Display devices (D)
SERVICE
PROVIDER
NETWORK
Video-
Network
API
Client-
Network
API
End-to-end QoE Adaptation
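The adaptation idea above (pick a representation from buffer state plus measured channel condition) can be sketched in a few lines. The bitrate ladder, thresholds, and 0.8 safety factor below are illustrative assumptions, not the demo's actual algorithm.

```python
# Available representations of the stream, in kbit/s (illustrative).
BITRATES_KBPS = [400, 1000, 2500, 5000]

def pick_bitrate(buffer_s, throughput_kbps):
    """Choose the highest bitrate the measured throughput can sustain,
    but drop to the lowest rung when the playback buffer is nearly
    empty, to avoid a re-buffering event."""
    if buffer_s < 2.0:          # near-empty buffer: play it safe
        return BITRATES_KBPS[0]
    # 0.8 safety margin against throughput estimation error
    sustainable = [r for r in BITRATES_KBPS if r <= 0.8 * throughput_kbps]
    return max(sustainable) if sustainable else BITRATES_KBPS[0]

print(pick_bitrate(buffer_s=10.0, throughput_kbps=4000))  # -> 2500
print(pick_bitrate(buffer_s=1.0, throughput_kbps=4000))   # -> 400
```

A QoE-based variant would additionally weight each rung by the per-segment quality side-information (e.g. an estimated MOS) rather than by bitrate alone, so that easy scenes need not be fetched at the top rate.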
17. (#034) Video Chapter Search by Text
Allows users to jump instantly to the scene they want to watch, simply by entering a keyword.
A new way of watching videos
Avoids watching the whole video
Suitable for educational content
Collaboration:
Advanced Media, Inc.
Gakkaihousou Co., Ltd.
18. (#035) Finger Gesture Solution for Medical
An implementation of perceptual computing in a medical application with a finger gesture solution.
Combines finger recognition technology based on NEC's original algorithm with a sensor device
Touch-free operation with finger movements enables control of the image viewer in the surgery room and of digital signage in public spaces
Terminals at medical offices require sanitary environments.
Collaboration:
NEC Corporation
NEC System Technologies, Ltd.
19. (#036) Digital Gacha
A touch-enabled digital Gacha machine built on the standard digital signage platform, OPS (Open Pluggable Specification).
Implements an Intel® Core™ i5 processor and Windows* 8.1 on the OPS platform
Allows a variety of Gacha applications to run on the platform:
Gacha mode
Vending machine mode
Drawing mode
Collaboration:
V-Sync Co., Ltd.
NEC Corporation
*Other names and brands are the property of their respective owners
20. (#037) Like it!-a-long
Users can express real-time feedback on the demos, and the data is visualized via the cloud. This continuous experience stimulates further collaboration opportunities.
Various technologies work collaboratively toward the same objective; users will find a continuum usage model
System flexibility allows other demos to operate together
Makes you think about the balance between big data and privacy in our real-life society
Visitors send "Like it!" data during the demos
Active participation with visualized feedback
A trip report based on your experience
21. (#038) DaaS (Display as a Service)
This research replaces today's physical link between the computer and the display with a standard wired or wireless network connection. This allows displaying video content from any number of systems on any number of displays in the same network without the need for physical cables. It allows:
One-to-many display, applicable to retail store environments
Many-to-one display, applicable to family entertainment
Many-to-many display, which can be used to create productive collaboration environments that span the world