1. The document proposes a heterarchical approach using intelligent cyber-physical products to schedule jobs in a partially flexible manufacturing system.
2. Under the proposed approach, each cyber-physical product uses distributed decision-making and learning to determine the optimal sequence of required services from flexible resources to complete its production.
3. Experimental results on three case studies show that the proposed approach using cyber-physical products outperforms traditional scheduling methods on key metrics such as average waiting time and completion time.
Kernel Recipes 2017 - Modern Key Management with GPG - Werner Koch
Although GnuPG 2 has been around for nearly 15 years, the old 1.4 version was still in wide use. With Debian and others making 2.1 the default, many interesting things can now be done. In this talk, Werner Koch explains the advantages of modern key algorithms, like ed25519, and why gpg relaxed some of its more paranoid defaults. The new --quick commands of gpg for easily scriptable key management are described, as well as the new key discovery methods. Finally, hints for integrating gpg into other programs are given.
Werner Koch, g10code
Linux 4.x Tracing: Performance Analysis with bcc/BPF - Brendan Gregg
Talk about bcc/eBPF for SCALE15x (2017) by Brendan Gregg. "BPF (Berkeley Packet Filter) has been enhanced in the Linux 4.x series and now powers a large collection of performance analysis and observability tools ready for you to use, included in the bcc (BPF Compiler Collection) open source project. BPF nowadays can do system tracing, software defined networks, and kernel fast path: much more than just filtering packets! This talk will focus on the bcc/BPF tools for performance analysis, which make use of other built-in Linux capabilities: dynamic tracing (kprobes and uprobes) and static tracing (tracepoints and USDT). There are now bcc tools for measuring latency distributions for file system I/O and run queue latency, printing details of storage device I/O and TCP retransmits, investigating blocked stack traces and memory leaks, and a whole lot more. These lead to performance wins large and small, especially when instrumenting areas that previously had zero visibility. Tracing superpowers have finally arrived, built in to Linux."
Talk for USENIX LISA17: "Containers pose interesting challenges for performance monitoring and analysis, requiring new analysis methodologies and tooling. Resource-oriented analysis, as is common with systems performance tools and GUIs, must now account for both hardware limits and soft limits, as implemented using cgroups. A reverse diagnosis methodology can be applied to identify whether a container is resource constrained, and by which hard or soft resource. The interaction between the host and containers can also be examined, and noisy neighbors identified or exonerated. Performance tooling can require special usage or workarounds to function properly from within a container or on the host, to deal with different privilege levels and namespaces. At Netflix, we're using containers for some microservices, and care very much about analyzing and tuning our containers to be as fast and efficient as possible. This talk will show you how to identify bottlenecks in the host or container configuration, in the applications by profiling in a container environment, and how to dig deeper into kernel and container internals."
Embedded Recipes 2018 - Finding Sources of Latency in Your System - Steven Ro...
Having just an RTOS is not enough for a real-time system. The hardware must be deterministic as well as the applications that run on the system. When you are missing deadlines, the first thing that must be done is to find what is the source of the latency that caused the issue. It could be the hardware, the operating system or the application, or even a combination of the above. This talk will discuss how to determine where the latency is using tools that come with the Linux Kernel, and will explain a few cases that caused issues.
Kernel Recipes 2018 - KernelShark 1.0: What's new and what's coming - Steven ...
Ftrace is the official tracer of the Linux kernel. It was added in 2008, and in 2009 came trace-cmd, a command line tool that makes interaction with ftrace easier. Shortly after that, KernelShark was created as a GUI for the trace-cmd interface. But as KernelShark and trace-cmd were mostly side projects, they didn't get as much activity as they deserved. trace-cmd was updated more often, but KernelShark suffered from bit-rot for some time. All of that has changed recently, as VMware now has active developers working on it.
KernelShark has been completely rewritten from scratch, and version 1.0 was due to be released in August of 2018 (it has already been released as of this talk). This talk discusses what changed, how to use the new tool, and what is coming in the future.
Kernel Recipes 2017: Performance Analysis with BPF - Brendan Gregg
Talk by Brendan Gregg at Kernel Recipes 2017 (Paris): "The in-kernel Berkeley Packet Filter (BPF) has been enhanced in recent kernels to do much more than just filtering packets. It can now run user-defined programs on events, such as on tracepoints, kprobes, uprobes, and perf_events, allowing advanced performance analysis tools to be created. These can be used in production as the BPF virtual machine is sandboxed and will reject unsafe code, and are already in use at Netflix.
Beginning with the bpf() syscall in 3.18, enhancements have been added in many kernel versions since, with major features for BPF analysis landing in Linux 4.1, 4.4, 4.7, and 4.9. Specific capabilities these provide include custom in-kernel summaries of metrics, custom latency measurements, and frequency counting kernel and user stack traces on events. One interesting case involves saving stack traces on wake up events, and associating them with the blocked stack trace: so that we can see the blocking stack trace and the waker together, merged in kernel by a BPF program (that particular example is in the kernel as samples/bpf/offwaketime).
This talk will discuss the new BPF capabilities for performance analysis and debugging, and demonstrate the new open source tools that have been developed to use it, many of which are in the Linux Foundation iovisor bcc (BPF Compiler Collection) project. These include tools to analyze the CPU scheduler, TCP performance, file system performance, block I/O, and more."
Performance Wins with BPF: Getting Started - Brendan Gregg
Keynote by Brendan Gregg for the eBPF summit, 2020. How to get started finding performance wins using the BPF (eBPF) technology. This short talk covers the quickest and easiest way to find performance wins using BPF observability tools on Linux.
Talk for YOW! by Brendan Gregg. "Systems performance studies the performance of computing systems, including all physical components and the full software stack to help you find performance wins for your application and kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc.), overviews of complex areas including profiling (perf_events) and tracing (ftrace, bcc/BPF, and bpftrace/BPF), advice about what is and isn't important to learn, and case studies to see how it is applied. This talk is aimed at everyone: developers, operations, sysadmins, etc., and in any environment running Linux, bare metal or the cloud."
re:Invent 2019: BPF Performance Analysis at Netflix - Brendan Gregg
Talk by Brendan Gregg at AWS re:Invent 2019. Abstract: "Extended BPF (eBPF) is an open source Linux technology that powers a whole new class of software: mini programs that run on events. Among its many uses, BPF can be used to create powerful performance analysis tools capable of analyzing everything: CPUs, memory, disks, file systems, networking, languages, applications, and more. In this session, Netflix's Brendan Gregg tours BPF tracing capabilities, including many new open source performance analysis tools he developed for his new book "BPF Performance Tools: Linux System and Application Observability." The talk includes examples of using these tools in the Amazon EC2 cloud."
Senthilkanth, MCA.
The following presentations cover the full Operating Systems syllabus for BSc CS, BCA, MSc CS, and MCA students:
1. Introduction
2. OS Structures
3. Processes
4. Threads
5. CPU Scheduling
6. Process Synchronization
7. Deadlocks
8. Memory Management
9. Virtual Memory
10. File System Interface
11. File System Implementation
12. Mass Storage Systems
13. I/O Systems
14. Protection
15. Security
16. Distributed System Structures
17. Distributed File Systems
18. Distributed Coordination
19. Real-Time Systems
20. Multimedia Systems
21. Linux
22. Windows
Are you using the fastest query tool for Hadoop? This talk provides and discusses the latest performance results of the industry-standard TPC-H benchmark executed across an assortment of open source query tools such as Hive (using MR, Tez, LLAP, Spark), SparkSQL, Presto, and Drill. The performance tests also cover a variety of data sizes, popular storage formats such as ORC, Parquet, and text, and compression codecs.
A fuzzy model based adaptive PID controller design for nonlinear and uncertai... - ISA Interchange
We develop a novel adaptive tuning method for the classical proportional-integral-derivative (PID) controller to adjust PID gains when controlling nonlinear processes, a problem which is very difficult to overcome with classical PID controllers. By incorporating classical PID control, which is well known in industry, into the control of nonlinear processes, we introduce a method which can readily be used by industry. In this method, controller design does not require a first-principles model of the process, which is usually very difficult to obtain. Instead, it depends on a fuzzy process model constructed from measured input-output data of the process. A soft limiter is used to impose industrial limits on the control input. The performance of the system is successfully tested on a bioreactor, a highly nonlinear process involving instabilities. Several tests showed the method's success in tracking, robustness to noise, and adaptation. We also compared our system's performance to that of a plant with altered parameters under measurement noise, and obtained less ringing and better tracking. To conclude, we present a novel adaptive control method, built upon the well-known PID architecture, that successfully controls highly nonlinear industrial processes, even under conditions such as strong parameter variations, noise, and instabilities.
A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn... - Pooyan Jamshidi
We enable reliable and dependable self-adaptations of component connectors in unreliable environments with imperfect monitoring facilities and conflicting user opinions about adaptation policies by developing a framework which comprises: (a) mechanisms for robust model evolution, (b) a method for adaptation reasoning, and (c) tool support that allows an end-to-end application of the developed techniques in real-world domains.
CPU scheduling describes how the operating system manages the processor: it decides which process runs on the CPU and when. CPU scheduling is the process of finding the ordering of jobs that gives the shortest waiting and fastest overall execution. The algorithms explained in this chapter are First-Come-First-Served, Shortest Job First, Shortest Remaining Time First, Round Robin, and Priority Scheduling.
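To make the comparison concrete, here is a minimal sketch (not from the chapter) computing the average waiting time of a batch of jobs that all arrive at time 0, under FCFS and under SJF:

```python
# Minimal sketch: average waiting time under FCFS vs. SJF for jobs
# that all arrive at time 0 (burst times in arbitrary time units).
def avg_waiting_time(burst_times):
    """Waiting time of a job is the sum of the bursts scheduled before it."""
    waiting, elapsed = 0, 0
    for burst in burst_times:
        waiting += elapsed   # this job waits for everything already scheduled
        elapsed += burst
    return waiting / len(burst_times)

bursts = [24, 3, 3]                      # classic textbook example
fcfs = avg_waiting_time(bursts)          # run in arrival order
sjf = avg_waiting_time(sorted(bursts))   # run shortest job first
print(fcfs, sjf)  # 17.0 3.0
```

Running the shortest jobs first cuts the average waiting time from 17.0 to 3.0 on this workload, which is why SJF is provably optimal for mean waiting time when all jobs are available at once.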
Hierarchical Digital Twin of a Naval Power System - Kerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Like other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real time or faster, which can modify hardware controls. However, its advantage stems from distributing computational effort across a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a single system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability, while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the small maximum deviations observed between the developed digital twin hierarchy and the hardware.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy configuration using DIP switches.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams, from the hydrologist's survey of the valley before construction through all the disciplines involved (fluid dynamics, structural engineering, generation, and mains-frequency regulation) to the transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co-editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two types of water scarcity: physical and economic.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with the latest industry news, and to celebrate the 13 years since the group was created, we have articles including:
A case study of the use of Advanced Process Control at the wastewater treatment works at Lleida in Spain
A look back at an article on smart wastewater networks, to see how the industry has measured up in the interim on the adoption of Digital Transformation in the Water Industry.
Water Industry Process Automation and Control Monthly - May 2024.pdf
Cyber-physical system with machine learning (Poster)
A novel approach for Cyber-Physical Manufacturing Systems optimization: a heterarchical architecture with distributed learning effect
Bouazza Wassim¹,² · Sallez Yves²
¹LIO, University of Oran 1, 1526 El Mnaouer, Oran, Algeria; ²LAMIH, UVHC, F-59313, Valenciennes, France
CONTEXT
PARTIALLY FLEXIBLE JOB-SHOP SCHEDULING PROBLEM
Proposed approach
ASSUMPTIONS
(1) All SPs are assumed to be available at time 0.
(2) All CPPs arrive dynamically from time 0.
(3) Each CPP is assumed to have a priority (or criticality) that is fixed a priori.
(4) Each CPP requests a set of services, one at a time.
(5) Each SP has an input queuing zone, which is assumed to be infinite.
(6) Each SP can process only one service at a time.
(7) Once a service begins on an SP, it cannot be interrupted.
(8) CPP inter-resource and inter-cell transportation times are not considered.
(9) The availabilities and characteristics of SPs are assumed to remain unchanged.
Scheduling constraints: dynamic job arrivals; family-dependent setup times; family-dependent processing times; across different partially flexible cells.
With IoT and cyber-physical technologies, factories are upgrading to Industry 4.0. The high flexibility of modern production systems involves more complex issues with regard to scheduling production jobs. The particular case of partial flexibility makes the scheduling more difficult, complicates the search space, and increases the computation time [1]. This work proposes to deal with the Partially Flexible Job-shop Scheduling Problem using a heterarchical approach based on intelligent cyber-physical products (CPPs).
[Figure: two-level architecture. The physical level holds the manufactured product and the resources; the logical level holds the associated cyber-physical product (decisional part) and service providers, organized in cells and stages, with each CPP j carrying its chain of services Srv1j, Srv2j, ..., SrvIj.]
- The “physical” level, composed of physical products and resources (e.g. machines).
- The “logical” level, which contains the computational entities associated with the resources and products, respectively, managing interactions to support the manufacturing process.
[Figure: internal architecture of the Cyber-Physical Product scheduling application. A Process Controller coordinates: (1) loading the chain of services from the Services Chains Database; (2) selecting the current service; (3) context analysis and identification, supported by the Manufacturing Information System; (4) a Scheduler with an Assignment Module (Q1 table, SPSR choice, selecting an SP) and a Sequencing Module (Q2 table, DR selection, applying the DR), both fed by stochastic parameters; (5) sending the jobs sequence; (6) waiting for service completion; (7) post-decisional evaluation (reinforcing, Experiences Database); and (8) looping for each service.]
1. The CPP uses the Services Chains Database to load the ordered list of services corresponding to its product family.
2. According to the chain of services, the current service is selected.
3. At the required cell, the CPP gathers information from its local environment (e.g. identifiers, priorities, arrival times, families, queued jobs, processing and setup times). The contextualization module examines and identifies the current context.
4. The scheduler module divides the decisional process into two steps: (A) assignment and (B) sequencing.
5. To apply the resulting job sequence, orders are sent to SPs to update the queues.
6. Once the new scheduling order has been sent, the CPP then waits for the service to be completed.
7. The CPP then evaluates the decisions made previously by calculating a Reward Function. The values in the Q1 and Q2 tables are then updated to save this post-decisional evaluation in the Knowledge Database.
8. The CPP refers back to the chain of services: if it is not empty, the Process Controller triggers a new decisional cycle. Otherwise, the CPP is completely manufactured.

Problem taxonomy: Flexibility (FCi): full, partial, or single machine. Operating time (PTCi): homogeneous, resource-dependent, or family-dependent. Setup time (STCi): without, homogeneous, or heterogeneous.
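The Q1/Q2 tables updated in step 7 suggest a standard tabular reinforcement-learning loop. The sketch below is an illustration of that kind of update only, not the poster's actual algorithm: the state and action encodings, the reward, and the parameters alpha, gamma, and epsilon are all assumptions.

```python
import random

# Illustrative tabular Q-learning, of the kind the poster's Q1/Q2 tables
# suggest. All names and parameters here are assumptions for illustration.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def select_action(q_table, state, actions):
    """Epsilon-greedy choice over candidate SPs (or dispatching rules)."""
    if random.random() < EPSILON:
        return random.choice(actions)          # explore
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))  # exploit

def update(q_table, state, action, reward, next_state, next_actions):
    """Standard Q-learning backup, applied after the post-decisional evaluation."""
    best_next = max((q_table.get((next_state, a), 0.0) for a in next_actions),
                    default=0.0)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# One decisional cycle over a hypothetical context and two candidate SPs.
q1 = {}
chosen = select_action(q1, "ctx0", ["SP1", "SP2"])
update(q1, "ctx0", chosen, reward=1.0, next_state="ctx1",
       next_actions=["SP1", "SP2"])
```

The same update shape would apply to the Q2 table, with dispatching rules as actions instead of service providers.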
Algorithm: Performance Indicator Pt
Output: value of P at instant t
1: initially P = 0 and y0 = 0
2: for j = 1 to J
3:   for all services Srvij of CPPj
4:     if Cij ≤ t
5:       P += wj * Wij
6:       yt += 1
7:     end if
8:   end for
9: end for
10: P /= yt
11: return P
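A minimal Python sketch of the pseudocode above, assuming Cij and Wij denote the completion and waiting times of service i of CPP j, and wj the CPP's priority weight (the priority index is assumed to be j, matching the wj ∈ [1...20] range given in the experimental data):

```python
# Sketch of the performance indicator Pt: the priority-weighted mean
# waiting time over all services completed by instant t. The nested-list
# layout of C and W is an assumption for illustration.
def performance_indicator(t, C, W, w):
    """C[j][i], W[j][i]: completion/waiting time of service i of CPP j;
    w[j]: priority weight of CPP j."""
    total, count = 0.0, 0
    for j in range(len(C)):            # over all CPPs
        for i in range(len(C[j])):     # over all services of CPP j
            if C[j][i] <= t:           # service already completed at t
                total += w[j] * W[j][i]
                count += 1
    return total / count if count else 0.0

# One CPP with two services; only the first has completed by t = 10.
print(performance_indicator(10, C=[[5, 12]], W=[[2, 3]], w=[2]))  # 4.0
```

The guard against a zero count handles the instant before any service has completed, which the pseudocode leaves implicit.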
Conclusion
In the present work, we were interested in partially flexible problems with family-dependent setup and processing times. These complex problems were directly inspired by real cases met in the pharmaceutical and food industries.
These encouraging results open up interesting prospects. First, concerning machine learning, it would be interesting to add an offline learning phase for faster convergence toward efficient behaviour. Further work must also be carried out to develop more effective contextualization by introducing a specific context for each decisional step. Some dynamic events, such as breakdowns or maintenance tasks, and some constraints, such as transportation times, must be handled in the future as well.
RESULTS

Case study #1
Algorithm             | WAWT | AWT    | Cmax     | ∑sj     | ∑sT     | ∑pT
Q-Algorithm (Best)    | 5.02 | 0.5268 | 21004    | 4325    | 3465    | 31984
SJF+LQE               | 5.94 | 0.5607 | 21005    | 4348    | 5173    | 37033
FIFO+LQE              | 5.94 | 0.5607 | 21005    | 4348    | 5173    | 37033
HPF+SST               | 6.83 | 0.7469 | 21004    | 1167    | 2033    | 39117
Q-Algorithm (Average) | 6.11 | 0.6328 | 21006.33 | 5628.56 | 4856.68 | 38557.46

Case study #2
Algorithm             | WAWT | AWT    | Cmax     | ∑sj     | ∑sT     | ∑pT
Q-Algorithm (Best)    | 4.00 | 0.5196 | 21016    | 2450    | 5118    | 41191
SJF+LQE               | 6.09 | 0.5972 | 21012    | 6953    | 14947   | 43648
HPF+SST               | 4.19 | 0.5193 | 21019    | 2451    | 5120    | 41191
FIFO+LQE              | 6.10 | 0.5994 | 21012    | 6963    | 14957   | 43748
Q-Algorithm (Average) | 4.20 | 0.6240 | 21015.33 | 3899.11 | 5248.62 | 41258.97

Case study #3
Algorithm             | WAWT | AWT    | Cmax     | ∑sj     | ∑sT     | ∑pT
Q-Algorithm (Best)    | 5.91 | 0.5835 | 21003    | 2594    | 2289    | 37755
SJF+LQE               | 6.49 | 0.6028 | 21005    | 4657    | 5657    | 37696
HPF+SST               | 7.17 | 0.7758 | 21004    | 2259    | 2360    | 37851
FIFO+LQE              | 6.18 | 0.5811 | 21005    | 4528    | 5437    | 37577
Q-Algorithm (Average) | 6.32 | 0.6981 | 21006.64 | 3651.68 | 2984.12 | 37125.52
[Charts: Machine Selection Rule distribution (SST, SPT, LQE, SQ) and Dispatching Rule distribution (FIFO, SJF, HPF, LIFO), showing per-rule selection counts for case studies #1-#3 on a 0-25000 scale.]
Performance indicators synthesis
[1] I. Kacem, S. Hammadi, and P. Borne, 'Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems', IEEE Trans. Syst. Man Cybern. Part C (Applications and Reviews), vol. 32, no. 1, pp. 1-13, Feb. 2002.
Experimental Data
Number of CPPs: J = 10500, j ∈ [1...10500]
Number of services: I = 4, i ∈ [1...4]
Total number of services requested: 29415
Number of families: F = 9, f ∈ [1...9]
Priority range: wj ∈ [1...20]
CPP arrival times: Aij ∈ [1...20999]
CPP arrival rate: 1 CPP per 2 time units
Environment: multi-agent simulator
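The workload parameters above can be turned into a small generator for experimentation. This is a hypothetical sketch, not the authors' simulator: the deterministic inter-arrival pattern and the uniform family and priority draws are assumptions.

```python
import random

# Hypothetical generator for the experimental workload described above:
# 10,500 CPPs arriving at a rate of one per 2 time units, each with a
# priority in [1, 20] and a product family in [1, 9] (uniform draws are
# an assumption; the poster does not state the distributions).
def generate_cpps(n_cpps=10500, inter_arrival=2, n_families=9, w_max=20, seed=0):
    rng = random.Random(seed)
    return [
        {"id": j,
         "arrival": j * inter_arrival,        # 1 CPP every 2 time units
         "family": rng.randint(1, n_families),
         "priority": rng.randint(1, w_max)}
        for j in range(1, n_cpps + 1)
    ]

cpps = generate_cpps()
print(len(cpps), cpps[0]["arrival"], cpps[-1]["arrival"])  # 10500 2 21000
```

The arrival span (2 to 21000) roughly matches the Aij ∈ [1...20999] range reported in the experimental data.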
Processing and setup times for case study #1 (P = processing time, S = setup time; families f = 1-4)
Cell1 SP1:  P 2 2 2 2 | S 1 1 1 1
Cell1 SP2:  P 2 2 2 2 | S 1 1 1 1
Cell1 SP3:  P 2 2 2 2 | S - - - -
Cell2 SP4:  P 2 2 x 2 | S - - x -
Cell2 SP5:  P 1 1 x 1 | S - - x -
Cell2 SP6:  P 2 2 x 2 | S - - x -
Cell3 SP7:  P 2 2 2 x | S 2 2 2 x
Cell3 SP8:  P 3 3 3 3 | S 2 2 2 2
Cell3 SP9:  P 2 2 2 x | S 3 3 - x
Cell4 SP10: P 2 x 2 x | S 1 x 1 x
Cell4 SP11: P 1 x 3 x | S 1 x - x
Cell4 SP12: P 2 x 2 x | S - x - x

Processing and setup times for case study #3 (P = processing time, S = setup time; families f = 1-9)
Cell1 SP1:  P 5 5 5 5 5 5 5 5 5 | S 2 2 2 2 2 2 2 2 2
Cell1 SP2:  P 5 5 5 5 5 5 5 5 5 | S 2 2 2 2 2 2 2 2 2
Cell1 SP3:  P 1 1 1 1 1 1 1 1 1 | S - - - - - - - - -
Cell2 SP4:  P 2 2 x 3 3 3 3 x x | S 2 2 x 2 2 2 2 x x
Cell2 SP5:  P 2 2 x 3 3 3 3 x x | S 4 2 x 2 3 3 4 x x
Cell2 SP6:  P 3 3 x 2 2 2 2 x x | S - - x 5 - 1 - x x
Cell3 SP7:  P 2 2 2 x x x x x x | S 2 2 2 x x x x x x
Cell3 SP8:  P 3 3 3 3 x x x 2 x | S 2 2 2 2 x x x 2 x
Cell3 SP9:  P 2 2 2 x x x x x x | S 3 3 - x x x x x x
Cell4 SP10: P 2 x 2 x 5 6 x x 9 | S 6 x 2 x - - x x -
Cell4 SP11: P 1 x 3 x 5 6 x x 9 | S 2 x - x 2 2 x x 2
Cell4 SP12: P 2 x 2 x 2 1 x x x | S - x - x 6 6 x x x
Processing and setup times for case study #2 (P = processing time, S = setup time; families f = 1-9)
Cell1 SP1: P 2 2 2 2 2 2 2 2 2 | S 1 1 1 1 1 1 1 1 1
Cell1 SP2: P 2 2 2 2 2 2 2 2 2 | S - - - - - - - - -
Cell2 SP3: P 2 2 x 2 2 2 2 x x | S - - x - - - - x x
Cell2 SP4: P 2 2 x 2 2 2 2 x x | S - - x - - - - x x
Cell3 SP5: P 3 3 3 3 x x x 2 x | S 2 2 2 2 x x x 2 x
Cell3 SP6: P 2 2 x x x x x x x | S 3 3 x x x x x x x
Cell4 SP7: P 1 x 3 x 5 6 x x 1 | S 1 x - x 1 1 x x 2
Cell4 SP8: P 2 x 2 x 2 1 x x x | S - x - x 1 1 x x x

GUI of the simulation tool developed