Good afternoon, everybody. I'm Angel Conde, DevOps & Data Engineer at Ikerlan. This is joint work between ULMA Handling Systems and Ikerlan
about how the challenges of Industry 4.0 can be addressed using Apache Mesos in the logistics domain.
This talk is organised in four main sections.
The first part is an introduction to ULMA Handling Systems and Industry 4.0.
Later, we will go through the different components used in the solution.
Then, the architecture of the solution will be described.
And finally, I will give some brief conclusions and future work.
Next, the three main parts of the system will be depicted.
Let's start with an introduction to ULMA Handling Systems.
ULMA provides all-round logistics systems (for example, automatic warehouses). It focuses on custom turnkey solutions where the design, development, assembly and maintenance are carried out by ULMA. Furthermore, ULMA has a worldwide presence.
Well, let's see what an ULMA warehouse is.
A warehouse is composed of different distributed elements,
for example stacker cranes, elevators and conveyors.
However, automatic elements fail sooner or later, due to physical or logical failures: software errors, incompatibilities during updates, and logical-physical mismatches. For example, a human can move a piece to a place where, according to the logical program, it should not be.
While we were building the Supervisor system, the world started to focus on Industry 4.0.
Definitely, ULMA wants a smart warehouse.
This led ULMA to build the Supervisor system. The ULMA Supervisor gathers information about operational data and malfunctions in a distributed manner.
To start addressing those concerns, data from the warehouses should go to the cloud, while the code base stays the same. That is how the cloud supervisor concept was born.
Each cloud supervisor is responsible for storing data using custom rules. For example, it can decide that an alarm generated by a real supervisor should not be stored. Therefore, this component is stateful, being a "real" mirror of what is happening in the physical warehouse.
Finally, in some cases we would like to support aggregates; that is, a cloud supervisor can hold data from several real supervisors.
This leads us to the next point.
Well, it's time to focus on the components of the platform.
In order to support the cloud supervisors and store their operational data for later analytics, some requirements were defined.
These are the goals we would like to achieve with our platform.
Apache Mesos is an open-source cluster manager that was developed at Berkeley. It provides efficient resource isolation and sharing across distributed applications or frameworks. The software enables resource sharing in a fine-grained manner, improving cluster utilization.
Let's see an overview of the Mesos platform.
At the bottom we have the physically distributed resources; on top of them, a distributed file system (for example HDFS); then the Mesos kernel, in charge of the resources; then the frameworks; and finally the different apps and services.
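The fine-grained, two-level idea behind Mesos, where the kernel offers resources and each framework accepts only the offers it can use, can be sketched roughly as follows. This is a toy illustration; none of these names correspond to the real Mesos API.

```python
# Toy sketch of Mesos-style two-level scheduling (illustrative only,
# not the real Mesos API): the master offers agent resources and each
# framework scheduler accepts only the offers it can use.

class Offer:
    def __init__(self, agent, cpus, mem_mb):
        self.agent, self.cpus, self.mem_mb = agent, cpus, mem_mb

def spark_scheduler(offer):
    """A hypothetical big-data framework: needs >= 2 CPUs and 4 GB."""
    return offer.cpus >= 2 and offer.mem_mb >= 4096

def service_scheduler(offer):
    """A hypothetical lightweight service: needs only 1 CPU and 512 MB."""
    return offer.cpus >= 1 and offer.mem_mb >= 512

def allocate(offers, schedulers):
    """Offer each agent's resources to the frameworks in turn."""
    placements = {}
    for offer in offers:
        for name, accepts in schedulers.items():
            if accepts(offer):
                placements[offer.agent] = name
                break  # offer consumed; declined offers go to the next framework
    return placements

offers = [Offer("agent-1", 4, 8192), Offer("agent-2", 1, 1024)]
print(allocate(offers, {"spark": spark_scheduler, "service": service_scheduler}))
# agent-1 fits the big-data framework; agent-2 only fits the lightweight service
```

This is how mixing big-data workloads with small custom services improves utilization: the small agent that Spark would decline is still used by another framework.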
This leads me to the next point: how did we build this infrastructure? Did we create Amazon instances and start installing all kinds of things via the shell?
Mesos was battle-tested at Twitter first.
It has support for clusters of up to 10,000 nodes.
It can launch and run any task using Docker containerization or cgroups.
This is the infrastructure-as-code concept.
To launch an infrastructure, from physical and virtual servers to email and DNS servers, on any cloud provider, we have used Terraform.
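As a rough illustration, a minimal Terraform sketch for launching cloud instances might look like this. The AMI ID, instance type, count and names are placeholders, not the actual configuration used in the platform.

```hcl
# Minimal illustrative Terraform sketch (placeholder values, not the real setup)
provider "aws" {
  region = "eu-west-1"
}

resource "aws_instance" "mesos_agent" {
  ami           = "ami-00000000"   # placeholder AMI ID
  instance_type = "t3.large"
  count         = 3                # for example, three agent nodes

  tags = {
    Role = "mesos-agent"
  }
}
```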
Terraform leads us to the next tool: for provisioning, configuration and upgrades, we have used Ansible.
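A minimal Ansible playbook sketch for provisioning the nodes could look like this; the host group, package and service names are illustrative assumptions, not the actual playbooks.

```yaml
# Minimal illustrative Ansible playbook sketch (names are placeholders)
- hosts: mesos_agents
  become: true
  tasks:
    - name: Install the Mesos package
      apt:
        name: mesos
        state: present
        update_cache: true

    - name: Ensure the Mesos agent service is running
      service:
        name: mesos-slave
        state: started
        enabled: true
```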
We use domain-based redirection.
For the platform storage we have chosen HDFS as the technology.
It has not been deployed using the corresponding Mesos framework, in order to avoid possible data losses.
Incoming traffic goes through the edge nodes, where the HTTPS is terminated.
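The combination of HTTPS termination at the edge and domain-based redirection could look roughly like this HAProxy-style fragment. Hostnames, certificate paths and backend addresses are placeholders, not the actual edge configuration.

```haproxy
# Illustrative edge-node sketch (placeholder names, not the real config)
frontend https_in
    bind *:443 ssl crt /etc/ssl/platform.pem   # HTTPS terminated at the edge
    acl is_supervisor hdr(host) -i supervisor.example.com
    use_backend supervisors if is_supervisor   # domain-based redirection
    default_backend services

backend supervisors
    server sup1 10.0.0.11:8080 check

backend services
    server svc1 10.0.0.21:8080 check
```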
Well, it's time to focus on the platform architecture.
In this figure an overview of the platform can be seen: the different supervisors are connected to the cloud.
Well, Spark is used as our analytics / ingestion tool.
We use it for both batch and real-time data, following the Lambda architecture.
Data ingestion and storage use the cluster resources for computing the queries, and the data is encoded as partitioned Parquet files.
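The Lambda idea, where queries merge a precomputed batch view over cold data with an incremental real-time speed view, can be sketched like this. The keys and counts are made up for illustration.

```python
# Toy Lambda-architecture merge (illustrative): a query combines the
# precomputed batch view with the incremental real-time (speed) view.
batch_view = {"alarms:sup-1": 120, "alarms:sup-2": 45}   # recomputed from cold data
speed_view = {"alarms:sup-1": 3}                          # recent, not yet batched

def serve(key):
    """Answer a query by merging the batch and speed layers."""
    return batch_view.get(key, 0) + speed_view.get(key, 0)

print(serve("alarms:sup-1"))  # 123
```

At each batch recomputation the speed view is absorbed into the batch view and reset, so the merge stays cheap.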
- HDFS-backed recent-data table
- Real-time algorithms
- Data is saved to the staging HDFS directory for later compaction
- Uncompacted partitioned Parquet (by date / by supervisor)
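The date/supervisor partitioning above can be sketched as a simple path-building helper; the exact directory layout shown here is an assumption for illustration, not the production layout.

```python
from datetime import date

def staging_path(base, supervisor_id, day):
    """Build an HDFS-style staging path partitioned by date and supervisor.
    The layout here is illustrative, not the production one."""
    return f"{base}/staging/date={day.isoformat()}/supervisor={supervisor_id}"

print(staging_path("/data", "sup-042", date(2016, 8, 1)))
# /data/staging/date=2016-08-01/supervisor=sup-042
```

Partitioning this way lets queries over one day or one supervisor read only the matching directories.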
The final component of the platform is the compactor.
This component is responsible for HDFS file compaction and is executed by the Chronos framework. Chronos is a Mesos framework responsible for batch tasks.
The compactor uses the Kite project for this task; we have two partitioned folders, the staging one and the compacted one.
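The compaction step itself, where many small staging files are rewritten as one larger file per partition, can be sketched in plain Python. The real compactor does this with the Kite project over Parquet files on HDFS, so this sketch only conveys the idea on local text files.

```python
import os
import tempfile

def compact_partition(staging_dir, compacted_file):
    """Merge every small file in a staging partition into one compacted file.
    Plain-file sketch of the idea; the real compactor works on Parquet in HDFS."""
    with open(compacted_file, "w") as out:
        for name in sorted(os.listdir(staging_dir)):
            path = os.path.join(staging_dir, name)
            with open(path) as small:
                out.write(small.read())
            os.remove(path)  # staging data has been consumed

# usage sketch on a temporary directory
with tempfile.TemporaryDirectory() as d:
    staging = os.path.join(d, "staging")
    os.mkdir(staging)
    for i in range(3):
        with open(os.path.join(staging, f"part-{i}.txt"), "w") as f:
            f.write(f"record-{i}\n")
    compact_partition(staging, os.path.join(d, "compacted.txt"))
```

Fewer, larger files avoid the small-files problem in HDFS, where every file costs NameNode memory and extra seeks.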
For the final part of the presentation, some conclusions and future work will be presented.
Let's start with some conclusions.
Remote monitoring has already proved valuable for clients, enabling remote maintenance.
The collected data is available 24x7.
The developed platform is easily deployable on any cloud provider, and it makes efficient use of resources by mixing big-data workloads with custom services.
Analytics can be done using standard SQL over both recent and cold data.
Finally, I would like to depict some future work we would like to address:
- As Mesosphere has open-sourced DC/OS, we would like to move to it in order to have commercial support.
- We would also like to explore resource oversubscription in Mesos in order to improve resource usage.
- Evaluate Cassandra as backend storage to avoid the HDFS compaction problem.
- Run more ULMA Handling software on the platform, hopefully stateful services or databases, once support for stateful apps matures.
We have come to the end of the presentation. I'd just like to thank you for listening, and I would be pleased to take your comments and questions now.
And if you want to get in touch just drop me an email.
After going through the requirements, let's review the historical infrastructure archetypes.
First of all we have on-premises; later on, proprietary hyperscale; after that, the cloud arose; and finally…..