1. A Distributed Simulation of P-Systems
A Syropoulos, EG Mamatas, PC Allilomes and KT Sotiriades
Research Division
Araneous Internet Services
Xanthi, Greece
E-mail: research@araneous.com
– p. 1/14
5. The core of our work
A simulation of P-systems.
Simulation: the representation of the operation or features
of one process or system through the use of another.
We represent only P-systems that are members of the
family NOP2(coo, tar).
We used Java’s Remote Method Invocation for the
representation.
7. What are P-Systems?
An abstract model of computation that is inherently
parallel.
A foundation for distributed computing.
12. Tools for Distributed Programming
Two basic ways to implement a distributed algorithm:
A purely distributed platform, or
Some network protocol that connects a number of
nodes that exchange data.
But... distributed operating systems (e.g., Plan 9) are,
in general, not widely available.
Fortunately, all modern general-purpose operating
systems provide the network capabilities needed to
create distributed applications.
19. Network Protocols
Distributed algorithms are implemented as:
A peer-to-peer or
A client-server architecture.
Sockets: pros and cons
The fundamental tool for implementing TCP/IP
networking applications.
Peer-to-peer applications require a (new) network
protocol for data exchange.
Java’s Remote Method Invocation: a solution to our
problem!
23. Java’s RMI in... detail
An object on one JVM can invoke methods on an
object in another JVM.
The arguments of the remote method are “marshalled”
and sent from the local JVM to the remote one, where
they are “unmarshalled.”
When the method terminates, the results are
marshalled on the remote machine and sent to the
caller’s JVM.
If for some reason an exception is raised, the
exception is reported to the caller.
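The call sequence described above can be sketched in a few lines of Java. The `Compartment` interface, the binding name “skin,” and the use of the default registry port are illustrative assumptions for the sketch, not details taken from the talk:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// A compartment's remote interface: methods callable from another JVM.
interface Compartment extends Remote {
    String receive(String multiset) throws RemoteException;
}

// The server-side object; extending UnicastRemoteObject exports it to RMI.
class CompartmentImpl extends java.rmi.server.UnicastRemoteObject
        implements Compartment {
    CompartmentImpl() throws RemoteException { super(); }
    @Override
    public String receive(String multiset) throws RemoteException {
        // The argument arrives unmarshalled; the result is marshalled back.
        return "received " + multiset;
    }
}

public class RmiSketch {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("skin", new CompartmentImpl());

        // On another machine this lookup would name the remote host
        // instead of localhost.
        Compartment skin =
                (Compartment) LocateRegistry.getRegistry("localhost", 1099)
                                            .lookup("skin");
        System.out.println(skin.receive("aab"));
        System.exit(0); // RMI threads are non-daemon; exit explicitly
    }
}
```

Both ends run in one JVM here only for demonstration; the point is that `skin.receive("aab")` looks like an ordinary method call while the argument and result cross a JVM boundary.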
27. The simulation
It is implemented in Java and makes heavy use of
Java’s RMI.
The system accepts an input file that describes a
P-system.
The simulation is distributed in the sense that a
number of objects execute code on different machines
while they communicate.
One object plays the rôle of the basic compartment,
while the others play the rôle of the internal
compartments.
28. The syntax of the language
system = “system” “is”
alphabet “and”
structure “and”
rules “and”
data “and”
output “and”
maximum “and”
“end”
alphabet = “[” letter { “,” letter } “]”
structure = “[” { “[” “]” } “]”
rules = “{” setOfRules { “,” setOfRules } “}”
29. The syntax of the language, cont.
setOfRules = “[” singleRule { “,” singleRule } “]”
singleRule = left “->” right
left = letter { letter }
right = replacement { replacement }
replacement = “(” letter [ “,” destination ] “)”
destination = “here” | “out” | in
in = “in” positive-integer
data = “{” Mset { “,” Mset } “}”
30. The syntax of the language, cont.
Mset = “(” { occurrence } “)”
occurrence = “[” letter “,” positive-integer “]”
output = “output” positive-integer
maximum = “maximum” positive-integer
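As an illustration, a hypothetical input file conforming to the grammar above might look as follows (two compartments, matching the family NOP2(coo, tar); the letters, rules, and numbers are invented for the example, and the inner compartment is assumed to be numbered 2):

```
system is
  [a, b] and
  [ [ ] ] and
  { [ ab -> (a, here)(b, in 2) ], [ b -> (a, out) ] } and
  { ( [a, 2] [b, 1] ), ( ) } and
  output 2 and
  maximum 10 and
end
```

Here the two sets of rules and two multisets correspond, in order, to the skin compartment and the single inner compartment of the structure `[ [ ] ]`.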
34. Details of the simulation
Upon startup, all objects start sending multicast UDP
packets to a well-known multicast address.
Each packet contains the IP address of its sender.
Multicast packets are received by every object
participating in the “network.”
The main object thus knows which objects are alive, so
it can decide whether the computation can start.
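The beacon packet itself is easy to picture. The group address 230.0.0.1 and port 4446 are invented for this sketch; the talk does not give the actual well-known address:

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class Beacon {
    // Hypothetical well-known multicast group and port.
    static final String GROUP = "230.0.0.1";
    static final int PORT = 4446;

    public static void main(String[] args) throws Exception {
        // Each beacon carries the sender's own IP address as its payload,
        // so receivers learn who is alive.
        byte[] payload = InetAddress.getLocalHost()
                                    .getHostAddress()
                                    .getBytes(StandardCharsets.UTF_8);
        DatagramPacket beacon = new DatagramPacket(
                payload, payload.length,
                InetAddress.getByName(GROUP), PORT);

        // A MulticastSocket that has joined GROUP on each node would
        // receive this packet; here we only inspect what was built.
        System.out.println(beacon.getAddress().isMulticastAddress()); // true
        System.out.println(new String(beacon.getData(), 0,
                beacon.getLength(), StandardCharsets.UTF_8));
    }
}
```

Sending would be a single `new DatagramSocket().send(beacon)`; receiving requires a `MulticastSocket` bound to the port that has joined the group.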
37. Details of the simulation, cont.
A universal clock is owned by the object that plays the
rôle of the basic compartment.
Communication breakdowns are treated as exceptional
situations and handled accordingly.
Objects operate in parallel, implementing the maximal-parallelism
requirement for this “simple” case.
40. Gaining maximal parallelism
Initially, the simulator checks which rules are applicable
and selects them.
Applicable rules with common elements on their
left-hand sides are checked for the changes they cause
to the system.
The “weight” of each side of a rule is
equal to the number of elements, or 1 if
there are no elements. The total “weight”
of a rule is equal to the product of its two
“weights.”
Only one rule is selected!
The remaining rules are used in the actual
computation.
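Under one plausible reading of the weighting rule above, with each side of a rule simplified to a plain string of letters, the computation looks like this. Selecting the heaviest of the conflicting rules is an assumption for the sketch; the talk does not state the selection criterion:

```java
import java.util.List;

public class RuleWeight {
    // Weight of a side = number of elements, or 1 if the side is empty;
    // total weight of a rule = product of the two side weights.
    static int weight(String left, String right) {
        int wl = left.isEmpty() ? 1 : left.length();
        int wr = right.isEmpty() ? 1 : right.length();
        return wl * wr;
    }

    public static void main(String[] args) {
        System.out.println(weight("ab", "abc")); // 2 * 3 = 6
        System.out.println(weight("a", ""));     // 1 * 1 = 1

        // Among applicable rules sharing left-hand elements, only one
        // is selected -- here, the one with the largest total weight.
        List<String[]> conflicting = List.of(
                new String[]{"ab", "abc"},
                new String[]{"a", "b"});
        String[] chosen = conflicting.stream()
                .max((r, s) -> Integer.compare(weight(r[0], r[1]),
                                               weight(s[0], s[1])))
                .get();
        System.out.println(chosen[0] + " -> " + chosen[1]);
    }
}
```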
43. Future work
Reimplement the system using the SOAP protocol.
Explore the foundational part of P-systems.
Design and implement a distributed programming
language...