Operator-Less Datacenters: Containerization, Micro-Services with Artificial Intelligence
"Alexa, can you please write an article for me? I need to submit it by EOD."
"Yes, share the topic and the template to choose from, and I'll write it for you," said Alexa.
Wondering who Alexa is?
Well, if you follow recent technology advancements, you have already figured out who Alexa is. It is the voice-controlled AI interface behind Amazon's "Echo" smart speakers, and it responds to the name "Alexa". It is just one example of the greater things we can do with the help of machine learning / AI. Imagine an IDC / data center without the conventional data-center operators, thereby reducing both the manual intervention and the commercials involved. Yes, it can surely become a possibility by using micro-services along with machine learning / AI in the cloud.
In this post, we will take a look at some of the evolving trends in the industry and their significance in redefining what a data center (DC) is.
Conventional Data-Centers
In a traditional IT approach, the technology and its infrastructure capabilities are housed on-site/on-premises. The company acquires the hardware and software necessary to meet the technology needs of the business, and hires IT professionals to handle any technology changes or upgrades and to perform any troubleshooting if something goes wrong.
Overheads of a Conventional DC
1. Hardware and software procurement: When you deploy a new service, traditional IT requires that you purchase a new program for that service. Your company pays an upfront cost, and then you own this program. Hired system admins install the program onto the computers, which can take time depending on the type of service being deployed as well as the size and schedule of your system-admin team.
2. Upgrade dependencies: You own any hardware or software upgrades, and it is your IT team that spends a significant amount of time administering each upgrade.
3. Scalability delay: To scale the IT infrastructure up or down, IT administrators need time to place the hardware servers into the DC [maybe a few weeks], which makes the whole process slow and bulky.
4. High operational cost: The cost of running an on-site DC is quite high; it involves not only the cost of servers, rack space, etc., but also non-IT overheads such as electricity, cabling, cooling, etc.
5. Audit and compliance overhead: For audit and ISO compliance purposes, the recurring cost of getting the DC certified periodically is one of the major burdens an IT firm has to bear with a conventional DC.
One alternative we have is to outsource data-center services to a low-cost model -- the cloud. This model allows us to respond to increasing customer demand during peak periods and reduce costs during off-peak periods, thereby making it economical, highly scalable, and free from compliance / upgrade dependencies.
Platform: Cloud
The cloud is a platform that is logically built, hosted, and deployed over the Internet. It possesses capabilities and functionality similar to those of a typical server. There are a number of cloud providers, such as AWS, GCP, Azure, and Alibaba Cloud.
Cloud servers can be customized to suit levels of performance, security, and control similar to those of a dedicated server. There are also some recommended / pre-defined categories of servers available, which can be configured per business requirements. The cloud delivers services/applications to users on demand via the Internet from a cloud computing provider's servers.
These services can be made independently deployable, small, and modular, with each running a unique process and communicating through a well-defined, lightweight mechanism to serve a business goal. They are better known as micro-services.
Micro-Services
Micro-services are a type of service-oriented architecture in which applications are built as a bundle of distinct smaller services rather than one whole app. Instead of a monolithic app, you have several small independent applications that can run on their own to deliver the same business logic as one big monolithic application. You can think of a micro-service as similar to a Linux process: it is designed to do one thing very well, runs only when and as long as needed, and combines with others to work together on collective data.
Micro-services run on containers. A container, as the name suggests, is a box in which an application with its runtime state can run as a separate entity. In short, it offers a logical packaging of applications. It runs on a host operating system and creates a runtime environment in which all the dependencies the app needs are resolved, contouring the app in such a way that it becomes independent of the host OS and behaves like a separate entity altogether.
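As a concrete illustration, a micro-service in the sense described above can be as small as a single HTTP endpoint. The sketch below is an invented health-check service in Python: it does exactly one thing and nothing else, and the service name and JSON payload are illustrative choices.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """A micro-service with a single responsibility: report service health."""

    def do_GET(self):
        body = json.dumps({"status": "ok", "service": "health"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

def serve():
    # Port 0 lets the OS pick a free port, so the sketch runs anywhere.
    server = HTTPServer(("127.0.0.1", 0), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

server = serve()
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    print(resp.read().decode())  # {"status": "ok", "service": "health"}
server.shutdown()
```

Packaged into a container image, a process like this is exactly the "does one thing well" unit that containers are designed to ship.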
Why Containers?
A lot of people might think we could choose virtualization instead of containerization. But imagine if the same thing could be delivered through a much smaller, lighter-weight box at a much faster rate. Instead of virtualization, which involves cloning the hardware stack, a container packs only the operating-system binaries essential for an app to run, with multiple containers running directly atop the OS kernel. The one overhead I can think of with containers is the need to manage their lifecycle and orchestration.
Docker is a popular, open-source container platform and image format.
Some of the benefits of using containers are listed below:
a> It is platform independent: once a container is built, you can run it anywhere.
b> Since containers don't require a separate OS, they use very few system resources, which increases efficiency and improves resource utilization.
c> Although you can run many containers on the same server, none of the containers interact with each other. Even if one application crashes, other containers running the same application will keep running flawlessly. This provides application isolation and decreases security risks.
d> Container images are lightweight and start in less than a second, since they do not require an operating-system boot. Creating, replicating, or destroying containers is also just a matter of seconds, which greatly speeds up the development process.
e> Containers ensure that applications run and work as designed locally. The eradication of environmental inconsistencies makes testing and debugging less complicated and less time-consuming. You can actually have a PROD-like scenario on your own laptop without investing many resources.
Once we start using containers to create systems of micro-services, we have to find a way to organize them. This can be done with the help of schedulers and orchestrators. Scheduling is about selecting a node to run a job on; it includes matching the needs of the job to the capabilities of the machine.
Orchestration decides how everything works together. Orchestrators also take care of networking, scaling, and responding to failure.
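The matching step can be sketched in a few lines. The node names, resource fields, and the "most free CPU" placement strategy below are illustrative choices, not any particular orchestrator's algorithm:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu: float  # free CPU cores
    mem: int    # free memory, MiB

@dataclass
class Job:
    name: str
    cpu: float
    mem: int

def schedule(job, nodes):
    """Pick the feasible node with the most free CPU (a 'spread' strategy)."""
    feasible = [n for n in nodes if n.cpu >= job.cpu and n.mem >= job.mem]
    if not feasible:
        return None          # the job stays pending, as in real orchestrators
    best = max(feasible, key=lambda n: (n.cpu, n.mem))
    best.cpu -= job.cpu      # reserve the resources on the chosen node
    best.mem -= job.mem
    return best.name

nodes = [Node("node-a", cpu=2.0, mem=2048), Node("node-b", cpu=4.0, mem=4096)]
print(schedule(Job("api", cpu=1.0, mem=512), nodes))    # node-b
print(schedule(Job("batch", cpu=4.0, mem=512), nodes))  # None: nothing fits
```

Real schedulers add constraints such as affinity, taints, and priorities, but the core of scheduling is this match between job needs and machine capabilities.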
Automating Orchestrators -- AI
How about automating the orchestration solution? If we can develop such automation, the concept of operator-less DCs might come into existence, and in the very near future. I can think of doing it with agents. These agents will have some basic rules built into them. These rules will control how the agent heals from errors, increases or decreases capacity, and what role it takes in the overall system.
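A minimal sketch of such a rule-driven agent follows; the rule names, thresholds, and state fields are invented for illustration:

```python
# A hypothetical rule-driven agent: each rule maps an observed condition to
# an action (heal, scale up, scale down). Thresholds are illustrative.

def heal(state):
    state["healthy"] = True
    return "restarted failed instance"

def scale_up(state):
    state["instances"] += 1
    return "added an instance"

def scale_down(state):
    state["instances"] -= 1
    return "removed an instance"

RULES = [
    (lambda s: not s["healthy"], heal),
    (lambda s: s["load"] / s["instances"] > 0.8, scale_up),
    (lambda s: s["instances"] > 1 and s["load"] / s["instances"] < 0.2, scale_down),
]

def agent_step(state):
    """Apply the first rule whose condition matches; otherwise do nothing."""
    for condition, action in RULES:
        if condition(state):
            return action(state)
    return "no action"

state = {"healthy": False, "instances": 2, "load": 1.9}
print(agent_step(state))  # restarted failed instance
print(agent_step(state))  # 1.9 / 2 = 0.95 > 0.8 -> added an instance
print(agent_step(state))  # 1.9 / 3 is within bounds -> no action
```

Each step is trivial on its own; the interesting behavior appears when many such agents run and react to each other's effects on the shared state.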
Fig: Joyent's ContainerPilot.
The above schematic shows Joyent's ContainerPilot. In this example, a system has been designed with the help of an autonomous cloud, governed and provisioned by orchestration tools. There is an agent composed of a micro-service process, a container, and an adaptation layer. It is through this layer that agents communicate, via a global state, with the other agents in the system.
However, whether these agents are smart or not depends on how we actually drive them via code, rules, etc. And when these agents interact with each other, providing the desired output without manual intervention, they will create the illusion of intelligence.
A good example here is Conway's Game of Life.
Conway's Life is an example of a complex adaptive system (CAS). The system is composed of autonomous agents, each following a specific rule assigned to it. The agents interact, and intelligence emerges from the system.
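The rule each cell-agent follows is tiny: a live cell survives with two or three live neighbours, and a dead cell comes alive with exactly three. A compact sketch:

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate between two orientations.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))                           # [(1, 0), (1, 1), (1, 2)]
print(sorted(step(step(blinker))) == sorted(blinker))  # True
```

No cell knows about blinkers, gliders, or oscillation; those structures emerge purely from the local rule, which is exactly the point of a complex adaptive system.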
Possibility? Yes, this may sound like science fiction, but it really can be materialized using orchestration
automation with artificial intelligence.
Benefits of using AI:
To implement AI solutions, we need lots of computing power and lots of data, along with the right questions to pose and the analysis to run on that data.
Once a question is posed, the machine runs lots of simulations. The output of one simulation can be cascaded into the next one. This is how machines learn.
Industry trends are showing that by taking up artificial intelligence (AI) technology, we can reduce operational costs, increase efficiency, grow revenue, and improve customer experience. Some of its benefits are listed below:
1. Increased productivity: By automating routine tasks, we can actually save time and money. Argentina-based credit firm oMelhorTrato.com uses AI for automation. Its HR department used to spend around two-thirds of its time manually reading CVs -- time that has been freed up by the use of AI. Three months after implementing the technology, productivity has increased by 21.3%, according to Rennella [the company's HR Director and Co-Founder].
2. Faster business decisions based on outputs from cognitive technologies.
HANA is SAP's cloud platform, which companies use to manage the databases of information they have collected. This AI replicates and ingests structured data, such as sales transactions or customer information, from relational databases, apps, and other sources.
The projected benefits of using machine-learning platforms for business intelligence include infrastructure cost reductions and operational efficiency. In one report, 10 organizations that use HANA said they expect to realize an average five-year return on investment of 575%: a benefit of $19.27 million per organization, compared with an investment of $2.41 million over five years. [Source: International Data Corp., a subsidiary of IDG]
3. Avoiding mistakes and 'human error', provided that smart systems are set up properly.
The best example of this is in high-end cars such as Mercedes and BMW, where manufacturers have set up Collision Prevention Assist. You get several seconds to respond to a warning light in the instrument cluster if you get that little bit too close to the vehicle in front, with an intermittent tone sounding if the distance decreases quickly. Where human senses fail, AI helps, thus reducing the probability of a mishap.
Similarly, with the use of machine learning, we can develop a robust system. Moving on from traditional systems and infrastructure technology, where we only know what happened, we need to develop predictive systems, probably in three classes:
a> Predictive systems: These systems continuously find patterns in past network data and use them to predict future behavior. ML can help analyze the various factors that may prove impactful, such as the time/day/hour, upcoming network events, or one-time or recurring external events or factors.
b> Predictive and prescriptive systems: These systems use AI algorithms to learn how to fix an issue. They first analyze past patterns, observe performance, and correlate it with the ideal state; then, independent of human direction, they identify undiscovered correlative factors affecting future performance outside the guidance of human logic.
c> Self-healing systems: By combining class-a and class-b systems, we can develop a system that predicts what is going to happen and where the probability of an issue is higher, and that, with AI and ML algorithms, can overcome the fault by itself without any human intervention.
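A toy version of the class-a predictive system can make the idea concrete: learn a per-hour baseline from past load data and flag the hours likely to need more capacity. The data, field names, and thresholds below are all invented for illustration:

```python
from collections import defaultdict

def learn_baseline(history):
    """history: list of (hour, load). Returns mean load per hour of day."""
    sums, counts = defaultdict(float), defaultdict(int)
    for hour, load in history:
        sums[hour % 24] += load
        counts[hour % 24] += 1
    return {h: sums[h] / counts[h] for h in sums}

def predict_risky_hours(baseline, capacity):
    """Hours whose expected load exceeds capacity -> scale up in advance."""
    return sorted(h for h, load in baseline.items() if load > capacity)

# three days of synthetic load with a morning peak at 09:00-10:00
history = [(h, 90 if h % 24 in (9, 10) else 40) for h in range(72)]
baseline = learn_baseline(history)
print(predict_risky_hours(baseline, capacity=60))  # [9, 10]
```

A class-b system would go one step further and map each flagged hour to a prescribed action, which is where the agent rules described earlier come back in.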
4. Increased up-time and availability of applications in the cloud.
Downtime is one of the costliest events for data-center operators and their clients. Reports put the average cost of an infrastructure failure at US$100,000 per hour; in the case of a critical application failure, that figure increases to US$500,000 up to $1 million. In 2014, Google started employing AI to overcome this problem, tracking the applicable variables and calculating maximum efficiency at its server farms. The algorithms developed from the collected data are now used globally by operations teams for optimal performance.
Lit-Bit developed artificial-intelligence software that observes and learns, magnifying the availability and reach of the teams physically walking the floors. The software, together with its sensors, becomes a natural extension of the technicians walking the data-center floor, adding a layer of automation with the potential to identify and solve problems, and the ability to predict failures before they happen.
Amazon has also introduced an AWS ML offering with auto-scaling capabilities: Amazon SageMaker. With SageMaker it is easier to manage production ML models using Auto Scaling: rather than having to manually manage the number of instances to match the scale you need for your inferences, you can use SageMaker's automatic scaling capabilities to adjust the number of instances based on an AWS Auto Scaling policy.
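The behavior of such a policy can be illustrated with a target-tracking rule of the kind AWS Auto Scaling applies: the fleet is resized so that a per-instance metric approaches a target value. This is a simulation of the idea, not the SageMaker API, and the capacity bounds and metric values are made up:

```python
import math

def target_tracking(current_instances, current_metric, target_metric,
                    min_capacity=1, max_capacity=10):
    """Desired fleet size so per-instance load approaches the target."""
    desired = math.ceil(current_instances * current_metric / target_metric)
    return max(min_capacity, min(max_capacity, desired))

# 2 instances each handling 900 invocations/min, target 600 -> scale out to 3
print(target_tracking(2, 900, 600))  # 3
# load drops to 150 per instance -> scale in to the minimum of 1
print(target_tracking(2, 150, 600))  # 1
```

The min/max bounds correspond to the capacity limits you register on the scaling policy, which keep a runaway metric from scaling the fleet without limit.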
5. Greater data-center power/energy efficiency: The growing adoption of, and inclination towards, the cloud and cost-efficient systems is accelerating the growth of large-scale data centers (DCs). As hardware affordability has increased due to cloud infrastructure services, and with the tremendous growth of Big Data, the modern Internet company encompasses a wide range of characteristics, including personalized user experiences and minimal downtime.
To power such DCs, we also need to establish and contemplate sourcing and techniques that will save power/energy consumption. Machine-learning techniques applying AI can help in this sphere as well.
ML is well suited to data-center environments, given the complexity of operations and the abundance of existing monitoring data. The number of possible combinations of DC gear and their set-point values makes it difficult to determine where the optimal efficiency lies. To address these challenges, a neural network is selected as the mathematical framework that searches for patterns and interactions between features to automatically generate a best-fit model.
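Stripped to its essentials, "searching for a best-fit model" means minimizing prediction error over the monitoring data. The one-neuron sketch below fits a single synthetic feature by gradient descent; a real DC efficiency model would use many features and hidden layers, and the data here is invented:

```python
def fit(xs, ys, lr=0.05, epochs=3000):
    """One-neuron gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# synthetic monitoring data: efficiency metric = 2 * feature + 1
xs = [0, 1, 2, 3, 4]
ys = [2 * x + 1 for x in xs]
w, b = fit(xs, ys)
print(round(w, 2), round(b, 2))  # 2.0 1.0
```

The same loop, scaled up to thousands of sensor features and non-linear layers, is what lets the model find set-point interactions no operator would spot by hand.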
6. Growing expertise by enabling analysis and offering intelligent advice and support.
IBM Watson is a current, widely used AI, helping organizations enable smart and intelligent analysis of structured and unstructured data.
Fig: Flow chart for IBM Watson Work
It can parse human language to recognize inferences between text passages with human-like accuracy, and at speeds and scales far beyond what any person can manage on their own. It can maintain a high level of accuracy when it comes to understanding the correct answer to a question.
Customer insight collected from unstructured data can lead to new businesses, and hence more revenue, improved customer satisfaction, and a greater competitive edge in the industry. Out-think your competitors by becoming a cognitive business.
All the above benefits will certainly help us achieve fully automated systems/applications. Micro-services orchestration, along with artificial intelligence using machine learning, can prove to be a stepping stone towards the possibility of operator-less DCs.
Some ongoing trends also show the industry's inclination towards AI and ML.
Ongoing Trends and Predictions
1. A $5+ billion AI/ML tools market by 2020: According to various studies by research firms, it is estimated that the AI market could reach $5.05 billion by 2020, thanks to the rising adoption and emergence of tools based on machine learning and natural-language synthesis.
2. More devices with AI support: About 6 to 8 billion Internet-of-Things devices [appliances, cars, wearables] will be actively requesting support from AI platforms by 2018. (Source: Gartner)
3. Today, just 15% of enterprises are using AI, but 31% of industry representatives said it is on the agenda for the next 12 months, and the rest have it in the development or implementation phase. (Source: Adobe)
4. When surveyed about the next big marketing trend, respondents identified consumer personalization (29%), AI (26%), and voice search (21.23%). This shows that AI is more pervasive and prominent than respondents realize. (Source: BrightEdge)
5. The influence of AI along with ML technologies on business is projected to increase labor productivity by up to 40%. (Source: Accenture)
6. More and more text-to-speech AI will be introduced. Apps that use voice to communicate with end users are becoming more common every day; look at Apple's Siri or Google's "OK Google". Whether it is an automated home speaker like Google Home or the Amazon Echo, voice-operated processes will take center stage in the near future.
In the same context, Amazon has launched Polly, a service that converts text into speech and offers 47 lifelike voices in 24 different languages. It supports Speech Synthesis Markup Language (SSML) tags such as prosody, so you can adjust the speech rate, pitch, or volume.
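Those prosody adjustments are expressed as SSML markup wrapped around the text to be spoken. A small helper sketch follows; the `<speak>` and `<prosody>` tags are standard SSML, while the helper name, attribute values, and sample text are illustrative:

```python
def ssml(text, rate="slow", pitch="+5%", volume="loud"):
    """Wrap plain text in SSML prosody markup for a TTS service."""
    return (
        f'<speak><prosody rate="{rate}" pitch="{pitch}" volume="{volume}">'
        f"{text}</prosody></speak>"
    )

print(ssml("Welcome to the operator-less data center."))
```

The resulting string is what an application would hand to the speech-synthesis service instead of raw text.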
A business might use it in gaming portals, e-learning applications, and so on. It could give rise to virtually a whole new brand of speech products that we might have imagined but never brought into reality.
But as I put forward the benefits of AI, I also want to list some of the grey factors that may play a vital role in deciding the future development of artificial intelligence.
a> AI can be driven to do something devastating, and at a scale of no return. Autonomous weapons are artificial-intelligence systems programmed to kill, and they could easily cause mass casualties.
b> AI may be programmed to do something beneficial, yet develop a destructive method for achieving its goal.
c> It could replace the human workforce, which will definitely lead to unemployment and ultimately hit the economic balance of a nation.
Conclusion
In my view, I'm pretty sure that in the near future I'll be directing an AI to handle my daily tasks, probably while commuting in a self-driving, eco-friendly car, with the cloud environment automated by intelligent agents using orchestration techniques.
The next stepping stone in the automation of data centers will be driven by micro-services and containers, which will use AI along with ML. Each innovation, each recombination, nudges us one step closer to the operator-less data center. We will have to come to terms with the fact that while what we are creating should ease everyone's life, it will also dislodge thousands of workers who, in the absence of employment and money, can create a societal imbalance. This is why we cannot (and should not) talk about our work without considering the wider, societal effects.
Like the saying, "With great power comes great responsibility."
References:
1. Joyent Container Pilot: https://www.joyent.com/containerpilot
2. Conway Life Game: http://www.conwaylife.com/
3. Docker: https://docs.docker.com/
4. News-Letter: https://www.techrepublic.com/article/the-automated-office-8-ways-companies-are-using-ai-to-increase-productivity/
5. News-Letter: https://www.techemergence.com/ai-in-business-intelligence-applications/
6. News-Letter: http://rootdatacenter.com/artificial-intelligence-in-data-centers-for-uptime/
7. IBM WATSON: https://www.ibm.com/watson/products-services/
8. Amazon Polly: https://aws.amazon.com/polly/faqs/
Details of the Author:
Name: Kishore Kumar
Company: Sapient RazorFish [Publicis.Sapient]
Team: Cloud and DevOPs Practice
Email: kkumar34@sapient.com