The Uncertain Enterprise
1. The Uncertain Enterprise: Achieving Adaptation
through Digital Twins and Machine Learning
Tony Clark
Aston University, UK
tony.clark@aston.ac.uk
November 25, 2020
2. Overview
Enterprise System Project Failures
Digital Twins: New Approach for Design and Control
A Digital Twin Design Method
Research Challenges
Resources
9. Adaptation Through Digital Twins
First introduced in Grieves, M., Digital Twin: Manufacturing
Excellence through Virtual Factory Replication. White paper (2002).
Working Definition:
An agent-based architecture where each product item has a
corresponding virtual counterpart or agent associated with it.
Främling et al., 2003. Product agents for handling information
about physical objects.
Survey: Barricelli, B.R., Casiraghi, E. and Fogli, D., 2019. A Survey on Digital
Twin: Definitions, Characteristics, Applications, and Design Implications. IEEE
Access, 7.
www.gartner.com/smarterwithgartner/
gartner-top-10-strategic-technology-trends-for-2019/
11. How to Develop Digital Twins for Uncertain Systems
Unlikely to have top-down behaviour.
Bottom-up information is partial.
Need a point-wise approach.
Unlikely to have good quality historical data.
Need to dynamically adapt.
Goals may change over time.
Environment may change over time.
12. Conceptual Model
Barat, S. Enterprise Digital Twin: An Approach to Construct Digital Twin for
Complex Enterprises. In Advanced Digital Architectures for Model-Driven Adaptive
Enterprises. IGI Global, 2020
16. Case Study
Orders arrive at the hub. Trucks wait at the hub and are
allocated orders. Trucks may form platoons before leaving to
deliver the orders. Orders are required on specific dates: any
earlier and the product will perish; any later and the customer
is unhappy. Platoons use less fuel than individual trucks.
There is a single route.
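The entities in the case study can be written down as a small data model; this is a minimal illustrative sketch (all class and field names are my own, not taken from the talk's ESL code):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Order:
    due: int         # required delivery date: earlier and the product perishes,
                     # later and the customer is unhappy
    location: str    # target location on the single route

@dataclass
class Truck:
    fuel_used: float = 0.0
    order: Optional[Order] = None   # None until the hub allocates an order

@dataclass
class Platoon:
    trucks: List[Truck]
    location: str    # current location ("hub" while waiting)

@dataclass
class Hub:
    waiting: List[Platoon] = field(default_factory=list)  # platoons at the hub
    moving: List[Platoon] = field(default_factory=list)   # platoons on the road
    orders: List[Order] = field(default_factory=list)     # unallocated orders
```

The later slides formalise exactly this state (hub platoons, moving platoons, pending orders) as a machine.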
28. Agent Model: Invariant Example
Represent knowledge about the domain that never changes as
OCL invariants. These will then need to be encoded in the
execution rules of the machine.
Example: all trucks are uniquely allocated to platoons.

context Platoon::trucks():[Truck] = Seq{head | tail}

context Hub inv uniqueTrucks:
  platoons→forAll(p1, p2 | p1 = p2 or
    p1.trucks()→intersection(p2.trucks())→isEmpty())
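The invariant can be checked directly over a concrete state. A minimal Python sketch, assuming platoons are simply lists of truck identifiers (an encoding chosen for illustration):

```python
def unique_trucks(platoons):
    """uniqueTrucks: no truck may be allocated to more than one platoon."""
    seen = set()
    for platoon in platoons:
        for truck in platoon:
            if truck in seen:
                return False   # truck appears in two platoons: invariant violated
            seen.add(truck)
    return True

assert unique_trucks([["t1", "t2"], ["t3"]])       # disjoint: invariant holds
assert not unique_trucks([["t1", "t2"], ["t2"]])   # t2 in two platoons: violated
```

A check like this is what "encoded in the execution rules of the machine" amounts to: every rule must preserve the predicate.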
29. Agent Model: Behaviour Example
Represent behaviour as OCL pre- and post-conditions for message
handlers. These will then need to be encoded in the execution
rules of the machine.
Example: platoons can optionally merge.

merged(Platoon(_,h,t+[hh]+tt), Platoon(_,h,t), Platoon(_,hh,tt))
merged(Platoon(_,hh,tt+[h]+t), Platoon(_,h,t), Platoon(_,hh,tt))

context Hub::tick()
pre : true
post: platoons→select(p | !p.isMoving)→forAll(p |
        platoons@pre→select(p | !p.isMoving)→forAll(p1, p2 |
          p = p1 or p = p2 or merged(p, p1, p2)))
Note that the specification of tick leaves open the choice of
whether to merge and by how much.
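One way to realise that open choice in a simulator is to parameterise the tick step over a merge policy. A sketch (platoons as lists of truck ids; the policy interface is my own, not ESL's):

```python
import random

def tick(stationary, should_merge=lambda p1, p2: random.random() < 0.5):
    """Merge adjacent stationary platoons whenever the policy says so.
    Every result platoon is either unchanged or the concatenation of two
    previous platoons, i.e. merged(p, p1, p2) holds for it."""
    result = list(stationary)
    i = 0
    while i + 1 < len(result):
        p1, p2 = result[i], result[i + 1]
        if should_merge(p1, p2):
            result[i:i + 2] = [p1 + p2]   # merged(p, p1, p2) with p = p1 ++ p2
        else:
            i += 1
    return result

# Deterministic policies pick out specific behaviours the spec allows:
assert tick([["t1"], ["t2"]], should_merge=lambda a, b: True) == [["t1", "t2"]]
assert tick([["t1"], ["t2"]], should_merge=lambda a, b: False) == [["t1"], ["t2"]]
```

The policy slot is exactly where a learner can later be plugged in: the specification constrains what merging may produce, not when to do it.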
32. Machine Definition: Representation
Aim: produce an execution function over states. Define the goal
over sequences of states. Agent-model is non-deterministic
(stochastic). Translate the agent-model into a single machine.
33. Platoon Machine
A machine state is (t, h, [ph, . . .], [pm, . . .], [o, . . .]) where t is the
current time and h is a special location called the Hub. A platoon p = (˜v, l) is
a sequence of trucks together with a current location; the ph are the platoons in
the hub and the pm are the platoons on the move. A truck v = (f, o) is a pair
containing fuel usage and an order. An order is (d, l), containing a delivery
time and a target location. A location is (n, l), with a name and the next
location.

Order:    (t, h, ˜ph+[([(_,⊥)],h)]+˜ph′, ˜pm, (d,l):˜o) ⟶ (t, h, ˜ph+[([(0,(d,l))],h)]+˜ph′, ˜pm, ˜o)
Return:   (t, h, ˜ph, ˜pm+[(˜v,h)]+˜pm′, ˜o) ⟶ (t, h, ˜ph+[([v],h) | v ∈ ˜v], ˜pm+˜pm′, ˜o)
Create:   (t, h, ˜ph+[(˜v,h)]+˜ph′+[(˜v′,h)]+˜ph″, ˜pm, ˜o) ⟶ (t, h, ˜ph+[(˜v+˜v′,h)]+˜ph′+˜ph″, ˜pm, ˜o)
          (t, h, ˜ph+[(˜v,h)]+˜ph′+[(˜v′,h)]+˜ph″, ˜pm, ˜o) ⟶ (t, h, ˜ph+[(˜v′+˜v,h)]+˜ph′+˜ph″, ˜pm, ˜o)
Starting: (t, h, ˜ph+˜ph′+˜ph″, ˜pm, ˜o) ⟶ (t, h, ˜ph+˜ph″, ˜ph′+˜pm, ˜o)
Moving:   (t, h, ˜ph, ˜pm, ˜o) ⟶ (t+1, h, ˜ph, move(t, ˜pm), ˜o)

move(t, ˜pm) changes the location in an order to ⊥ when it arrives, increases the
fuel usage, decreases the order time, and advances the location of each moving
platoon.
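The move step can be sketched in Python under an illustrative encoding: a platoon is (trucks, location), a truck is (fuel, order), an order is (due, target), and a location is a dict with a name and the next location on the single route. Arrival at an order's target sets the order to None (standing in for ⊥); the fuel-sharing rate is my own stand-in for "platoons use less fuel":

```python
FUEL_PER_STEP = 1.0   # assumed cost of one step, shared across a platoon

def move(platoons):
    """Advance each moving platoon one location: burn (shared) fuel,
    tick order deadlines down, and mark orders delivered on arrival."""
    moved = []
    for trucks, location in platoons:
        next_loc = location["next"]            # advance along the single route
        new_trucks = []
        for fuel, order in trucks:
            fuel += FUEL_PER_STEP / len(trucks)   # larger platoon, less fuel each
            if order is not None:
                due, target = order
                order = None if target == next_loc["name"] else (due - 1, target)
            new_trucks.append((fuel, order))
        moved.append((new_trucks, next_loc))
    return moved

# A single truck one step from its target delivers on arrival:
b = {"name": "B", "next": None}
a = {"name": "A", "next": b}
[(trucks, loc)] = move([([(0.0, (1, "B"))], a)])
assert loc["name"] == "B" and trucks[0] == (1.0, None)
```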
36. Goal Definition
We now have a precise description of computation, as sequences of
machine states, so it is possible to define goals over them. Here is a
projection onto one aspect of goals. When delivering to locations, we
are OK if no platoons are moving:

  (_, _, _, [], _)

We are also OK when any truck delivers exactly on time:

  (_, _, _, [([(_, (d, l))], l′)], _)   such that l = l′ ⇒ d = 0

This must apply throughout the structure:

  (_, _, _, ˜pm, _)    (_, _, _, ˜pm′, _)
  ─────────────────────────────────────
        (_, _, _, ˜pm + ˜pm′, _)

  (_, _, _, [(˜v, l)], _)    (_, _, _, [(˜v′, l)], _)
  ───────────────────────────────────────────────
          (_, _, _, [(˜v + ˜v′, l)], _)
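This goal projection is just a predicate over the moving-platoons component of a state, and the structural rules say it distributes over concatenation of platoons and of trucks. A sketch under an illustrative encoding (platoons as (trucks, location-name) pairs, trucks as (fuel, order), orders as (due, target)):

```python
def goal_ok(moving):
    """OK when nothing is moving, or when every truck that is at its
    order's target location has remaining delivery time exactly 0."""
    for trucks, location in moving:
        for _fuel, order in trucks:
            if order is not None:
                due, target = order
                if target == location and due != 0:
                    return False   # arrived early (due > 0) or late (due < 0)
    return True

assert goal_ok([])                              # no platoons moving
assert goal_ok([([(0.0, (0, "B"))], "B")])      # delivered exactly on time
assert not goal_ok([([(0.0, (2, "B"))], "B")])  # arrived two time units early
```

Because the predicate is checked truck by truck, it automatically satisfies the two structural rules: it holds of ˜pm + ˜pm′ exactly when it holds of both parts.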
42. ESL Agents
How can OO Languages accommodate Reinforcement Learning?
agent [best] name(args)::Agent[S,M,A] extends parent {
...learning parameters...
...local values and operations...
states::[S] = ...
messages::[M] = ...
actions::[A] = ...
init ()::S = ...
terminalState(history::[S])::Bool = ...
reward(history::[S]):: Float = ...
// The non-deterministic state transitions...
pm, ps when b → {
    a1 : e1
  | a2 : e2
  | ...
}
}
http://www.esl-lang.org
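The interface an agent[S,M,A] presents to a learner can be mirrored as a Python abstract class; this is only a sketch of the shape suggested by the slide (method names follow the ESL declarations; nothing here is the actual ESL runtime):

```python
from abc import ABC, abstractmethod
from typing import Dict, Generic, List, TypeVar

S = TypeVar("S")   # states
M = TypeVar("M")   # messages
A = TypeVar("A")   # actions

class Agent(ABC, Generic[S, M, A]):
    """What an agent[S,M,A] exposes to a reinforcement learner."""

    @abstractmethod
    def init(self) -> S: ...

    @abstractmethod
    def terminal_state(self, history: List[S]) -> bool: ...

    @abstractmethod
    def reward(self, history: List[S]) -> float: ...

    @abstractmethod
    def transitions(self, message: M, state: S) -> Dict[A, S]:
        """The guarded rules pm, ps when b → { a : e | ... }: for a
        (message, state) pair, the available actions and the successor
        state each action yields. The learner resolves the choice."""
```

The point of the interface is that the non-deterministic choice among actions is exactly the degree of freedom a learning algorithm optimises.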
44. A Counter in ESL
data Move;
data Actions = Inc | Dec | Noop;

agent [-] counter(limit::Int)::Agent[Int,Move,Actions] {
  explorationFactor::Float = 0.9;
  explorationDecay::Float = 0.9995;
  states::[Int] = -1..limit+1;
  messages::[Move] = [Move];
  actions::[Actions] = [Inc,Dec,Noop];
  init()::Int = 0;
  terminalState([][Int]::[Int])::Bool = false;
  terminalState(n:s::[Int])::Bool = n = limit;
  reward(s:ss::[Int])::Float = length[Int](ss) when s = limit;
  reward(ss::[Int])::Float = 40.0;
  Move,n when n < 0     → { Inc : n + 1 }
  Move,n when n > limit → { Dec : n - 1 }
  Move,n when n < limit → { Inc : n + 1 | Noop : n | Dec : n - 1 }
  Move,n when n = limit → { Noop : n }
}

counter(10);
counter(15);
counter(20);
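The same environment can be trained with standard tabular Q-learning. The sketch below mirrors the counter's states, actions, and guards (for limit = 10); the reward shaping, hyperparameters, and training loop are my own illustrative choices, not ESL's built-in learner:

```python
import random

random.seed(0)
LIMIT = 10
ACTIONS = ["Inc", "Dec", "Noop"]

def step(n, a):
    return {"Inc": n + 1, "Dec": n - 1, "Noop": n}[a]

def legal(n):
    # Mirror the agent's guarded transition rules.
    if n < 0:
        return ["Inc"]
    if n == LIMIT:
        return ["Noop"]
    if n > LIMIT:
        return ["Dec"]
    return ACTIONS

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.9, decay=0.9995):
    q = {(s, a): 0.0 for s in range(-1, LIMIT + 2) for a in ACTIONS}
    for _ in range(episodes):
        n, steps = 0, 0
        while n != LIMIT and steps < 200:            # terminalState: n = limit
            acts = legal(n)
            if random.random() < eps:
                a = random.choice(acts)              # explore
            else:
                a = max(acts, key=lambda x: q[(n, x)])   # exploit
            n2 = step(n, a)
            r = 40.0 if n2 == LIMIT else -1.0        # reach the limit quickly
            q[(n, a)] += alpha * (r + gamma * max(q[(n2, x)] for x in legal(n2))
                                  - q[(n, a)])
            n, steps = n2, steps + 1
            eps *= decay                             # explorationDecay
    return q

q = train()
```

After training, the greedy policy counts up: from state 9, Inc (value near the terminal reward of 40) beats Noop and Dec.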
52. The Uncertain Enterprise: Challenges
Validation: check that you have the right simulation.
Verification: check that the simulation is right.
Modelling Expertise: agents, machines, levers, goals, rewards,
neural networks.
Ethical Considerations: trust, values.
Intelligence, collaboration, conflict, game theory.
Efficiency: training, distributed AI and ML.
Explainability, traceability.
Unknown unknowns.