Super-Convergence of Autonomous Things
Mohammad Fairus Khalid
MIMOS Berhad
Kuala Lumpur, Malaysia
fairus.khalid@mimos.my
Hong Hoe Ong
MIMOS Berhad
Kuala Lumpur, Malaysia
hh.ong@mimos.my
Buhary Ikhwan Ismail
MIMOS Berhad
Kuala Lumpur, Malaysia
ikhwan.ismail@mimos.my
Rajendar Kandan
MIMOS Berhad
Kuala Lumpur, Malaysia
rajendar.kandan@mimos.my
Abstract— Industry has progressed from basic mechanical assist systems to fully autonomous things such as advanced robotics, driverless vehicles and monitoring drones. The use of autonomous things is the new revolution. In addition to IT support devices such as smartphones and computers, we now have systems that physically interact with the world and assist us with daily tasks and work. Although the benefits are widely discussed, the realization of these autonomous things is still lacking. This paper explores various implementations that ease technology adoption and proposes a high-level architecture for the solution.
Keywords— Cloud Computing, Edge Computing, Robotic
Cloud
I. INTRODUCTION
The fourth industrial revolution has started to change the way we live and work. It aims to elevate global income levels and increase the quality of life for populations across the globe. Technological innovation will lead to optimized use of resources and improved productivity. Transportation and communication costs will drop, logistics and global supply chains will become more effective, and the cost of trade will diminish, all of which will open new markets and drive economic growth. In the fourth industrial revolution, humans and autonomous things work together to create a better future [1][2][3][4].
Although the benefits are well presented, the implementation of autonomous things is still below expectation. A study by Boston Consulting Group found that many companies have high ambitions for deploying autonomous things as part of the transition to advanced automation. According to the study, across industries more than 80% of participants say that their company has already gained experience in deploying advanced robots. Nevertheless, only 11% of participants say that their company has successfully implemented such systems in multiple areas of its production facilities. The report cited robotic systems' low levels of maturity and performance as the main reasons for the low success rate [5].
Studies have discovered similarities between the personal-computer and personal-robot industries in their early years. The technologies are fragmented, with different hardware and software platforms, and their operational purposes are inflexible and limited. The influx of newer hardware and software trends that are modular and open has helped to pave the way for innovation [4].
In this paper we explore existing platforms that help to ease the implementation of autonomous things, and we share a proposed high-level architecture for the solution. The paper is organized as follows: Section II explains some core technology definitions that help in understanding the topic. Section III discusses the complexity of implementing autonomous things. Section IV surveys existing platforms that aim to help with implementation. Section V presents a proposed high-level architecture for the solution. In the last section we summarize our discussion and note other areas worth exploring.
II. DEFINITIONS
A. Autonomous Things
Industrial robots, surveillance drones and driverless cars are some examples of autonomous things. Their mechanization goes beyond the automation provided by fixed rule-based programming models; they exploit artificial intelligence to deliver advanced behaviors that interact intuitively with their environments and with humans [6].
The autonomous capability is demonstrated best in open and dynamic environments. In such settings, for example, robotic systems assist people in their daily lives at work, at home and at leisure. The technology also helps aging, ailing and disabled people.
B. Cloud Robotics
Kehoe et al., in the paper titled “A Survey of Research on Cloud Robotics and Automation,” define cloud robotics as follows: “Any robot or automation system that relies on either data or code from a network to support its operation, i.e., where not all sensing, computation, and memory is integrated into a single standalone system.” [7].
By this definition, cloud robotics does not limit the infrastructure to a centralized location, i.e. physical cloud servers. The infrastructure can take distributed and hierarchical forms. Autonomous things can offload processing to local computing devices located near the robots; this approach is called edge computing. Processing can also be extended to larger computing resources such as cloud computing itself. The processing locality depends on the workload's resource requirements [8].
C. Super-Convergence
The term Super-Convergence is derived from the definition of converged IT infrastructure. In data center management, “converged” refers to an approach that packages servers, storage and networking as a single entity and manages them through virtualization technologies. The objective of converged infrastructure is to reduce data center management complexity [9]. Super-Convergence applies to the robotics cloud, whose scope is broader than data center management: it combines data center resources, edge computing and robotics. In addition, the integration goes beyond managing the hardware components; it includes managing the software and configuration of the autonomous things.
III. IMPLEMENTATION COMPLEXITY
For autonomous things to function, they need an ecosystem. Fig. 1 depicts the overall ecosystem. At the lowest layer are the autonomous things, with different types of capabilities and various components such as embedded controllers, sensors, actuators and joints [10]. At the middle layer is the edge computing facility, located on premise near the robots; it helps to reduce the volume of data transferred to the cloud and enables real-time feedback [11]. At the top layer is cloud computing, located centrally with vast computing and storage capacity. The intelligent behavior of the autonomous things is developed and trained in the cloud, which can handle large amounts of training data and generate multiple versions of machine learning models [12]. Once a model has been baselined, it is pushed to the edge computing layer or to the autonomous things themselves for inference.
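The train-in-the-cloud, infer-at-the-edge flow described above can be sketched as follows. This is a minimal illustration, not any platform's API: the registry and device classes, version labels and accuracy figures are all illustrative assumptions.

```python
# Sketch of the cloud-to-edge model lifecycle: models are trained and
# versioned in the cloud, and only a baselined version is pushed toward
# the edge for inference. All names here are illustrative assumptions.

class ModelRegistry:
    """Cloud-side store holding every trained model version."""
    def __init__(self):
        self.versions = {}    # version label -> model artifact
        self.baseline = None  # version promoted for deployment

    def register(self, version, artifact):
        self.versions[version] = artifact

    def promote(self, version):
        """Baseline a version after validation."""
        self.baseline = version

class EdgeNode:
    """Edge device that serves inference with the pushed model."""
    def __init__(self):
        self.model = None

    def pull_baseline(self, registry):
        if registry.baseline is not None:
            self.model = registry.versions[registry.baseline]

# Train several candidate versions in the cloud ...
registry = ModelRegistry()
registry.register("v1", {"accuracy": 0.81})
registry.register("v2", {"accuracy": 0.93})

# ... then baseline the best one and push it to the edge.
registry.promote("v2")
edge = EdgeNode()
edge.pull_baseline(registry)
print(edge.model)  # the baselined model now serves inference at the edge
```

Pull-based synchronization is only one possible design; a real platform might equally push the baselined model out to registered devices.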
The ecosystem shows how complex it is to implement autonomous things: competent personnel with a vast range of knowledge spanning electronics, mechanics, IT systems, big data and artificial intelligence are required. We have identified three areas of complexity, discussed below:
A. Development Complexity
Europe's Robotics 2020 Multi-Annual Roadmap identified distinctive system abilities that characterize autonomous things [3]. This set of system abilities captures the important system-level performance characteristics of robots. A major part of development is to build and configure these abilities based on the challenges the autonomous things are meant to solve.
For each of these abilities, developers must go through a series of repetitive development processes such as tweaking, testing and simulation to create a suitable machine learning model and application [13]. After the implementation has been validated and verified in the simulated environment, the application is deployed in the actual setting. At this point another series of tests is conducted before the work can be baselined and passed to operate in the production environment.
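The repetitive tweak-test-simulate loop above can be sketched as a simple iteration that only releases an ability once it passes simulated validation. The scoring function, threshold and parameter names are hypothetical stand-ins for a real simulator and test suite.

```python
# Sketch of the iterative development loop: tweak parameters, evaluate
# in simulation, and deploy only once the result passes validation.
# simulate() is a stand-in for a real simulator run; it returns a
# quality score on an assumed 0-10 scale.

def simulate(params):
    # Hypothetical scoring: quality improves with each tuning round.
    return 5 + params["iterations_tuned"]

def develop_ability(threshold=9, max_rounds=10):
    params = {"iterations_tuned": 0}
    for round_no in range(max_rounds):
        score = simulate(params)
        if score >= threshold:
            return round_no, score       # validated: ready to deploy
        params["iterations_tuned"] += 1  # tweak and try again
    raise RuntimeError("ability never passed simulated validation")

rounds, score = develop_ability()
print(rounds, score)  # number of tuning rounds needed, final score
```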
B. Execution Complexity
The execution, or functioning, of autonomous things has its own complexity. The implementation follows the generic Sense-Plan-Act robot control architecture [14]. Not all processing happens locally inside the robot itself: some functions execute on edge computing devices and others in the cloud. The robot senses the environment through its sensors and collects the sensed data, which may be gathered within the robot itself, on edge computing devices, or on cloud computing servers. The choice depends on the complexity of the analysis and the criticality of the process. For a vital behavior such as failure dependability, processing can happen internally. For less complex functions that require interoperation between heterogeneous systems, such as robot-to-robot interaction, processing can happen on an intermediary device near the robots. For more complex but less critical abilities, such as cognitive learning, processing can happen in the cloud.
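The placement rule described above can be sketched as a small decision function: vital functions stay on the robot, simple cross-system functions go to the edge, and complex but non-critical functions go to the cloud. The function names and their attributes are illustrative assumptions taken from the examples in the text.

```python
# Sketch of the execution-placement rule: map each robot function to
# robot, edge or cloud based on criticality and analysis complexity.

def place(function):
    if function["critical"]:
        return "robot"   # vital behaviors stay onboard
    if function["complex"]:
        return "cloud"   # heavy but non-critical analysis
    return "edge"        # simple interoperation handled nearby

functions = [
    {"name": "failure_dependability", "critical": True,  "complex": False},
    {"name": "robot_to_robot",        "critical": False, "complex": False},
    {"name": "cognitive_learning",    "critical": False, "complex": True},
]

placement = {f["name"]: place(f) for f in functions}
print(placement)
```

A real system would of course weigh latency, bandwidth and privacy as well; the two boolean attributes here just encode the paper's own examples.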
C. Operation Complexity
Autonomous things operate across multiple control planes. The first is the physical control plane, which holds physical elements such as the robots' physical components, edge computing devices, cloud servers, storage and networking. The second is the application control plane, which holds software elements such as the various operating systems, libraries, drivers, database servers, the big data and artificial intelligence software stacks, and the autonomous things' business logic and workflows. The third is the configuration control plane, which holds the system configuration information that dictates how each component behaves. Examples of configuration elements are machine learning models, data models, database schemas and system parameter definitions.
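The three control planes can be sketched as one nested structure that an operator tool would manage. Every concrete value below is an illustrative assumption; a real deployment would populate these planes from inventory and deployment tooling.

```python
# Sketch of the three control planes as a single nested structure.
# All concrete values are illustrative assumptions.

control_planes = {
    "physical": {                       # servers, edge devices, robot hardware
        "cloud_servers": 4,
        "edge_devices": 2,
        "robots": ["arm-01", "agv-02"],
    },
    "application": {                    # OS, drivers, AI stack, business logic
        "robot_middleware": "ROS",
        "ai_stack": ["training", "inference"],
    },
    "configuration": {                  # models, schemas, parameters
        "ml_model": "pick-and-place-v2",
        "db_schema": "telemetry_v1",
        "parameters": {"max_speed": 0.5},
    },
}

# An operator tool would check that every plane is populated before
# allowing the autonomous thing to run.
ready = all(control_planes[p] for p in ("physical", "application", "configuration"))
print(ready, sorted(control_planes))
```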
All these complexities (the development of various autonomous capabilities, the multiple layers of control, and the spectrum of processing from autonomous things through edge computing to cloud computing) create a burden for anyone who wants to implement autonomous things. The implementer is required to have both breadth and depth of knowledge across all components of the ecosystem.
IV. EXISTING PLATFORMS
Fig. 1. Cloud Robotics Ecosystem

Before we jump into the proposed architecture, let us take a look at existing platforms with similar goals in addressing the problem. For ease of explanation, the information is presented in table form in Table I below. For comparison we chose three platforms: Amazon Web Services (AWS) RoboMaker [15], which launched in late 2018; Google Cloud Robotics [16], due to launch in 2019; and the Rapyuta Cloud Robotics Platform [17][18], which is based on the RoboEarth project [19].
We evaluate the identified platforms against the three areas of complexity discussed earlier. For operation complexity, we look at features that help with server orchestration, robotic orchestration, application orchestration and configuration composition. For development complexity, we look at tools or services that accelerate the development process, including development services, data curation platforms, machine learning model creation, and simulation and testing. For execution complexity, we look at the middleware or application programming interfaces, communication channels and runtime services that help to integrate execution decisions seamlessly.
TABLE I. ROBOTICS PLATFORM COMPARISON

| Criteria                  | AWS RoboMaker                                | Google Cloud Robotics            | Rapyuta Robotics Platform     |
|---------------------------|----------------------------------------------|----------------------------------|-------------------------------|
| Server Orchestration      | Virtualization                               | Virtualization                   | Depends on service provider   |
| Robotic Orchestration     | Robot Operating System + extensions          | Robot Operating System           | Robot Operating System        |
| Application Orchestration | Fleet Management                             | Kubernetes                       | Container                     |
| Configuration Composition | Fleet Management                             | Helm Package Manager             | Container                     |
| Development Services      | Cloud9                                       | App Management                   | Catalog                       |
| Data Curation             | Data Lake Foundation [20]                    | Dataprep [22], Datalab [23]      | Not available                 |
| ML Model Creation         | SageMaker [21]                               | Cloud ML Engine [24]             | Not available                 |
| Simulation & Testing      | Gazebo                                       | Not available                    | Not available                 |
| Execution Middleware      | ROS, IoT Greengrass                          | ROS, Kubernetes                  | ROS                           |
| Communication             | Greengrass Connectors                        | Robot Fleet Connectivity         | WebSocket-based               |
| Runtime Services          | CloudWatch, Lex, Polly, Kinesis, Rekognition | Core platform as optional extensions | RoboEarth knowledge repository |
All platforms employ server virtualization to abstract the management of servers, and they use container-based tools to ease application and configuration deployment. For robot management, they use the Robot Operating System (ROS) as the main robotic middleware [25]. Looking further into the execution middleware, we find that AWS IoT Greengrass uses Lambda functions, i.e. serverless technology, for seamless execution [26]. Amazon's approach is a closely knit, end-to-end solution. The Google Cloud Robotics platform is an open platform that can be extended with external services from third-party service providers [16]. The Rapyuta Robotics Platform focuses on composing existing ROS packages and outsourcing their execution to the cloud [18].
V. PROPOSED ARCHITECTURE
The proposed architecture shows how the system features and technology stack are derived from the overall business requirement of easing the implementation of autonomous things. The proposed architecture is based on The Open Group Architecture Framework (TOGAF), an enterprise architecture framework standard created by The Open Group [27].
Fig. 2 below shows the Organizational Conceptual Landscape Map View of the cloud robotic infrastructure. It starts with a high-level business architecture view covering the service medium, the methods by which users access the system, the users of the system, and the core services the system provides. At the second layer are the application and data components, centered on the application services that realize the business requirements specified in the core services. At the last layer are the technology components, which capture the core technologies enabling the application services.
At this juncture we discuss three aspects of the architecture: the core services, the application services and the technology. From the earlier discussion we can summarize that the cloud robotics infrastructure will have four core services. The first is an integrated development, testing and simulation environment. This core service acts as an implementation dashboard with all the essential tools, such as a development editor, testing instruments and simulation set-up. The second is an integrated operation, monitoring and maintenance service that helps with post-development activities; among its functions are robotics application deployment and robotics operation life cycle management. The other two core services are internal and third-party services. Their function is to provide additional services that expedite the development and operation of the robotics application. Examples are AWS CloudWatch [28] and Google Stackdriver [29], which are used to monitor and manage the cloud robotic infrastructure.

Fig. 2. Organizational Conceptual Landscape Map View of the Cloud Robotic Infrastructure
The second aspect of the architecture is the application services. These components realize the core service requirements. The Robotic Integrated Development Environment (IDE) implements the development requirements. Robotic Operation and Maintenance implements the integrated operation requirements. The service catalogue and service broker implement the internal and third-party services: the catalogue provides the list of available services, and the broker helps to negotiate and link services with the robotics applications.
The third aspect of the architecture is the technology. At this layer we have identified three main technology areas: cloud computing, edge computing and robotics. To manage these three elements seamlessly, we need to build an abstraction layer that supports the distributed nature of these combined technology areas. Virtualization helps to abstract the physical computing components, and containerization helps to abstract the application runtime and configuration. The Robot Operating System facilitates the robotic functions. Software Defined Networking (SDN) and Network Function Virtualization (NFV) enable seamless communication across the distributed network. A decentralized file system assists with unified data access.
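The abstraction layer described above can be sketched as one common interface that hides whether a workload lands on a virtualized cloud server, a containerized edge node, or a ROS-driven robot. The class and method names below are illustrative assumptions, not any platform's API.

```python
# Sketch of the super-convergence abstraction layer: a single
# interface over the three resource types, so that deployment logic
# does not care which tier it targets. All names are assumptions.

class Resource:
    kind = "generic"
    def deploy(self, workload):
        return f"{workload} on {self.kind}"

class CloudServer(Resource):
    kind = "cloud-vm"        # abstracted through virtualization

class EdgeNode(Resource):
    kind = "edge-container"  # abstracted through containerization

class Robot(Resource):
    kind = "robot-ros"       # functions exposed via robot middleware

# The abstraction layer treats all three tiers uniformly.
fabric = [CloudServer(), EdgeNode(), Robot()]
report = [r.deploy("perception-service") for r in fabric]
print(report)
```

The point of the sketch is the uniform `deploy` call: the distributed, heterogeneous fabric is driven through one interface, which is what the proposed architecture's abstraction layer provides.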
VI. CONCLUSION
Autonomous things are not here to replace human beings; they are helping us to achieve a sustainable future in which resources are used efficiently. At present, the proliferation of autonomous things is hampered by the complexity of application development and operation. A super-convergence system helps to address this issue. This paper examined existing implementations and abstracted out their core functions and technologies. Beyond what has been discussed, other areas worth exploring include ethics, security and governance, which can also hinder the implementation of autonomous things.
REFERENCES
[1] K. Schwab, “The Fourth Industrial Revolution: What It Means and
How to Respond,” [Online]. Available:
https://www.foreignaffairs.com/articles/2015-12-12/fourth-industrial-
revolution
[2] Y. Liao, E. R. Loures, F. Deschamps, G. Brezinski, and A. Venâncio,
“The impact of the fourth industrial revolution: a cross-country/region
comparison,” Production, vol. 28, no. 0, 2018.
[3] “Robotics 2020 Multi-Annual Roadmap For Robotics in Europe,”
Horizon 2020 Call ICT-2017 (ICT-25, ICT-27 & ICT-28), Release B
Dec. 2016.
[4] J. M. Hollerbach, M. T. Mason, and H. I. Christensen, “A roadmap for
us robotics–from internet to robotics,” Workshop on emerging
technologies and trends., 2009.
[5] D. Küpper, M. Lorenz, C. Knizek , K. Kuhlmann, A. Maue, R. Lässig,
and T. Buchne, “Advanced Robotics in the Factory of the Future,”
[Online]. Available: https://www.bcg.com/en-
sea/publications/2019/advanced-robotics-factory-future.aspx
[6] Gartner, “Gartner Identifies the Top 10 Strategic Technology Trends
for 2019,” [Online]. Available:
https://www.gartner.com/en/newsroom/press-releases/2018-10-15-
gartner-identifies-the-top-10-strategic-technology-trends-for-2019
[7] B. Kehoe, S. Patil, P. Abbeel, and K. Goldberg, “A Survey of Research
on Cloud Robotics and Automation.” IEEE Transactions on
Automation Science and Engineering 12 (2): 398–409, 2015.
[8] O. Saha and P. Dasgupta, “A comprehensive survey of recent trends in
cloud robotics architectures and applications,” Robotics, vol. 7, no. 3,
2018
[9] Wikipedia, “Converged Infrastructure,” [Online]. Available:
https://en.wikipedia.org/wiki/Converged_infrastructure
[10] P. Simoens, M. Dragone and A. Saffiotti, “The Internet of Robotic
Things: A review of concept, added value and applications,” Int. J.
Adv. Robot. Syst., 2018.
[11] W. Shi, J. Cao, Q. Zhang, et al., “Edge computing: vision and
challenges,” IEEE Internet of Things Journal, 3(5): pp. 637–646, 2016.
[12] B. Xu, D. Mylaraswamy, and P. Dietrich, “A Cloud Computing
Framework with Machine Learning Algorithms for Industrial
Applications,” WorldCom ICAI, 2013.
[13] M. Mayo, “Frameworks for Approaching the Machine Learning
Process,” KDnuggets, [Online]. Available:
https://www.kdnuggets.com/2018/05/general-approaches-machine-
learning-process.html
[14] R. A. Brooks, “A Robust Layered Control System for a Mobile Robot,” IEEE Journal of Robotics and Automation, vol. RA-2, no. 1, March 1986.
[15] Amazon Web Service, “AWS RoboMaker,” [Online]. Available:
https://aws.amazon.com/robomaker/
[16] Google Inc., “Cloud Robotics Core: Kubernetes, Federation, App
Management,” [Online]. Available:
https://googlecloudrobotics.github.io/core/
[17] Rapyuta Robotics, “Rapyuta.io Cloud Robotics Platform,” [Online].
Available: https://www.rapyuta-robotics.com/rapyuta_io
[18] G. Mohanarajah, D. Hunziker, M. Waibel, and R. D'Andrea, “Rapyuta:
A cloud robotics platform,” IEEE Trans. Autom. Sci. Eng. (T-ASE),
vol. 12, no. 2, pp. 481–493, Apr. 2015.
[19] RoboEarth, “RoboEarth,” [Online]. Available:
http://roboearth.ethz.ch/
[20] Amazon Web Service, “Data Lake Foundation on AWS,” [Online].
Available: https://aws.amazon.com/quickstart/architecture/data-lake-
foundation-with-aws-services/
[21] Amazon Web Service, “Amazon SageMaker,” [Online]. Available:
https://aws.amazon.com/sagemaker/
[22] Google Inc., “Cloud Dataprep by Trifacta,” [Online]. Available:
https://cloud.google.com/dataprep
[23] Google Inc., “Cloud Datalab,” [Online]. Available:
https://cloud.google.com/datalab
[24] Google Inc., “Cloud Machine Learning Engine,” [Online]. Available:
https://cloud.google.com/ml-engine/
[25] “ROS (Robot Operating System),” [Online]. Available:
https://www.ros.org/
[26] Amazon Web Service, “AWS IoT Greengrass,” [Online]. Available:
https://aws.amazon.com/greengrass/
[27] The Open Group, “The TOGAF Standard,” [Online]. Available:
https://www.opengroup.org/togaf
[28] Amazon Web Service, “Amazon CloudWatch,” [Online]. Available:
https://aws.amazon.com/cloudwatch/
[29] Google Inc., “Google Stackdriver,” [Online]. Available:
https://cloud.google.com/stackdriver/