!CHAOS seminar 12_2012

  1. !CHAOS — Control system based on a Highly Abstracted and Open Structure. Seminar 2012. http://chaos.infn.it (Thursday, January 10, 13)
  2. What is !CHAOS? !CHAOS (Control system based on Highly Abstracted and Open Structure) is an experiment of CSN5 (technological research experiments) at the LNF and Roma Tor Vergata sections. The project has also been well received in the academic world: Università di Roma Tor Vergata, LNF - Roma2, Università di Cagliari.
  3. Objective. !CHAOS has been designed to scale widely and to manage large apparatus with different needs in terms of storage technology and data-acquisition speed.
  4. Objective. A new topology in the control system that permits integrating the triggered DAQ; a new way to access real-time data; fully scalable data acquisition, storage and indexing.
  5. Objective. A C++ framework and services that permit: 99% of the work of adapting an instrument to !CHAOS to be done by the framework; simplified creation of the user interfaces; simplified creation of the control algorithms, based on an abstraction of the I/O channels or of the full dataset rather than on the physical device.
  6. !CHAOS Simplified Vision
  7.–9. Simplified Vision (diagrams, progressively detailed): the user-specialized boundary (User Interface, Controls Algorithm, and Instrument Driver implementations) sits on top of the CHAOS boundary (Abstract RPC Driver, Abstract Event Driver, Abstract Direct I/O); the network layer connects to the Directory Server and to the Live Data / History Data clouds, which can be replicated across multiple implementations.
  10. !CHAOS Real Model
  11. Real Model (diagram). On the user side of the abstraction boundary: user interfaces for monitoring and controls. Inside !CHAOS: the User Interface Toolkit, Execution Unit Toolkit and Control Unit Toolkit, each built on the Common Toolkit and communicating via RPC, Events and Direct I/O; the Control Unit Toolkit hosts the drivers for sensors and instruments (controls/algorithm activity). Across the data boundary: CQL Proxy nodes, Indexer nodes and Management nodes handling live/history data; Chaos Directory nodes expose node information and management over an HTTP API, within a security domain.
  12. Toolkits and Services. !CHAOS is a set of toolkits and services that permit the realization of a control system.
  13. !CHAOS Toolkits and Services. Toolkits: Common Toolkit, CU Toolkit, UI Toolkit, EU Toolkit. Services: Directory Services, Live and History Data, CQL Proxy, Indexer, Management.
  14. !CHAOS Toolkits
  15. !CHAOS Toolkits. Common Toolkit: implements the fundamental software layers for RPC, I/O, events and other utility code and logic. CU Toolkit: abstracts the !CHAOS resources to device-driver developers. UI Toolkit: abstracts the !CHAOS resources to developers of control and monitoring user interfaces. EU Toolkit: abstracts the !CHAOS resources to developers of control and computing algorithms.
  16. !CHAOS Services
  17. !CHAOS Services. Directory Services: provide services via RPC (or other higher transport layers) to get structured device information and the system topology. CQL Proxy: gives other !CHAOS nodes access to insert and query operations on live and history data using the Chaos Query Language. Indexer: performs the indexing of newly inserted data. Management: performs the management of the data chunks saved on the storage (erase, move, …).
  18. !CHAOS Scalability and Fault-Tolerance. Some nodes need to be scaled for performance reasons or for fault tolerance: Directory nodes; Live and History nodes.
  19.–24. !CHAOS Scalability and Fault-Tolerance (diagrams of a device-side writer and a client-side reader, each with a framework-managed server list P1…PN):
     • Clients that need to talk to scaled services have more than one IP address for every service.
     • Every IP address has a priority; the client first uses the address with the highest priority.
     • When a server goes down, another IP address is used.
     • Rebalancing occurs when a server with higher priority, which previously went down or was stopped for maintenance, becomes up and running again.
     • The new server is used starting from a predetermined timestamp (TS1).
     • All the operations and the calculation of TS1 are coordinated by one of the directory nodes.
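The failover policy on the slides above can be sketched in a few lines of C++. This is a minimal illustration, not the framework's actual code: the slides do not fix the priority convention (lower number = higher priority is an assumption here), and the names `Endpoint` and `selectEndpoint` are hypothetical.

```cpp
#include <string>
#include <vector>

// One replicated server of a scaled service, as seen by a client.
struct Endpoint {
    std::string address;
    int priority;   // lower value = higher priority (assumed convention)
    bool alive;     // updated by the framework's failure detection
};

// Pick the alive endpoint with the highest priority; nullptr if all are down.
const Endpoint* selectEndpoint(const std::vector<Endpoint>& eps) {
    const Endpoint* best = nullptr;
    for (const auto& e : eps) {
        if (!e.alive) continue;                         // skip failed servers
        if (!best || e.priority < best->priority) best = &e;
    }
    return best;
}
```

Re-running the selection after a higher-priority server comes back models the rebalance step; in the real system that switch is coordinated by a directory node and takes effect at timestamp TS1.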
  25. !CHAOS Communication (11/Dec/2011)
  26. Communication Systems. !CHAOS has three different communication systems: RPC, used for commands (messages that must be securely received and recorded, and messages that need a response [asynchronous]); Events, used to inform a wide range of nodes that something has happened (device heartbeat, alert on a device channel, received command, etc.); Direct I/O, used for insert and select operations on the live and history data.
  27. Communication Systems. Every !CHAOS node has at least one client and one server instance of each of the three systems.
  28. RPC System. The RPC layer is implemented as a plug-in system; it is abstracted from the internal CHAOS framework so we can change it. It is used to send important messages whose delivery status must be known; it can be used to receive asynchronous responses; the set command for an input channel on a control unit is done with RPC.
  29. RPC System. [Diagram: message forwarding from Node A to Node B with a synchronous ack; optional answer, with response forwarding from Node B back to Node A and a synchronous ack.]
  30. RPC System — Message Flow. The request message is constructed with BSON, serialized and sent to the RPC endpoint. If the requester wants a response, it puts an answer-code and the sender address in the BSON pack, and the RPC system automatically waits for the answer. Both request and response involve the forwarding of an ack message from the receiver to the sender.
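The request construction above can be sketched as follows. A `std::map` stands in for the real BSON document, and the field names (`answer_code`, `sender_address`, etc.) are illustrative, not the actual !CHAOS keys.

```cpp
#include <map>
#include <string>

// Stand-in for a serialized BSON document (key -> value).
using BsonPack = std::map<std::string, std::string>;

// Build an RPC request; when a response is wanted, the answer-code and
// sender address are embedded so the answer can be routed back asynchronously.
BsonPack makeRequest(const std::string& action, bool wantAnswer,
                     const std::string& senderAddress, int answerCode) {
    BsonPack msg;
    msg["action"] = action;
    if (wantAnswer) {
        msg["answer_code"]    = std::to_string(answerCode);
        msg["sender_address"] = senderAddress;
    }
    return msg;
}
```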
  31. Event System. The event layer is implemented as a plug-in system; it is abstracted from the internal CHAOS framework so we can change it. There are four types of event: Alert (highest priority), Instrument, Command, Custom. All information is packed into 1024 bytes and transported by protocols that support multicast (UDP, PGM [http://en.wikipedia.org/wiki/Pragmatic_General_Multicast]).
  32. Event System — some numbers about event data. Every event type has 2^16 sub-types; every sub-type can have up to 2^16 priority levels; every sub-type can carry up to 255 bytes of information.
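The numbers above can be made concrete with a sketch of an event header. The field layout is an assumption for illustration (the actual wire format is not shown in the slides); only the limits — four types, 2^16 sub-types, 2^16 priorities, 255 information bytes, 1024-byte datagram — come from the deck.

```cpp
#include <array>
#include <cstdint>
#include <cstring>

// The four event classes named on slide 31.
enum class EventType : std::uint8_t { Alert = 0, Instrument, Command, Custom };

struct EventPacket {
    EventType     type;
    std::uint16_t subType;    // 2^16 sub-types per event type
    std::uint16_t priority;   // up to 2^16 priority levels
    std::uint8_t  infoLen;    // up to 255 bytes of information
    std::array<std::uint8_t, 255> info;
};

// The whole event must fit one 1024-byte multicast datagram.
static_assert(sizeof(EventPacket) <= 1024, "event must fit one datagram");

EventPacket makeAlert(std::uint16_t subType,
                      const std::uint8_t* data, std::uint8_t len) {
    EventPacket p{};
    p.type = EventType::Alert;   // Alert is the highest-priority class
    p.subType = subType;
    p.priority = 0;
    p.infoLen = len;
    std::memcpy(p.info.data(), data, len);
    return p;
}
```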
  33. Direct I/O System. Direct I/O is implemented as a plug-in system. It is used to perform inserts and selects on the live and history data, and can be used as a stream to receive the result data of a query or to read the stored raw data files of the devices.
  34. !CHAOS Abstraction of the Data
  35. Abstraction. !CHAOS uses a structured serialization for data description and exchange. BSON (http://bsonspec.org/) has been chosen: very similar to JSON, but binary. The serialization is wrapped in a C++ class so it can be changed in the future. It is used in: hardware dataset descriptions and acquired-data packaging (“channel desc” = value); RPC messages; the Chaos Query Language.
  36. Hardware Dataset. Hardware attributes and I/O channels are described in the DATASET. Each attribute is defined by: Name; Description; Type (kind + basic type, e.g. VOLT32, CUSTOM_STR, etc.); Cardinality (single or array); Flow (Input, Output, Bidirectional); Range.
  37. Hardware Dataset — example DATASET: name: VOLT_CHANNEL_1, type: VOLT32, flow: output, range: 1-20, cardinality: 1; name: VOLT_CHANNEL_2, type: VOLT32, flow: output, range: 1-20, cardinality: 1; name: RESET, type: BYTE, flow: input, cardinality: 1.
  38. Hardware DATASET — the JSON visualization of the BSON dataset representation: {"device_id": DEVICE_ID, "device_ds": { "name": "VOLT_CHANNEL_1", "desc": "Output volt...", "attr_dir": 1, "type": 4, "cardinality": 1 } .... }
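The dataset description above can be sketched as plain C++ structures. The enum and field names are illustrative: the slide's JSON uses numeric codes ("attr_dir": 1, "type": 4) whose exact mapping is not given in the deck.

```cpp
#include <string>
#include <vector>

enum class Flow { Input, Output, Bidirectional };

// One attribute / I-O channel of the hardware DATASET (slide 36).
struct AttributeDesc {
    std::string name;
    std::string desc;
    std::string type;        // e.g. "VOLT32", "BYTE", "CUSTOM_STR"
    int cardinality;         // 1 = single value, >1 = array
    Flow flow;
    double rangeMin, rangeMax;
};

struct Dataset {
    std::string deviceId;
    std::vector<AttributeDesc> attributes;
};

// The example dataset from slide 37: two output volt channels and a RESET input.
Dataset exampleDataset() {
    return {"DEVICE_ID",
            {{"VOLT_CHANNEL_1", "Output volt...", "VOLT32", 1, Flow::Output, 1, 20},
             {"VOLT_CHANNEL_2", "Output volt...", "VOLT32", 1, Flow::Output, 1, 20},
             {"RESET", "", "BYTE", 1, Flow::Input, 0, 0}}};
}
```

In the real framework such a structure is serialized to BSON and shipped over RPC; here it only shows the shape of the description.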
  39. !CHAOS Unit Toolkit
  40. CU Toolkit. The CU Toolkit helps Control Unit development. The developer needs to extend only one class to create a CHAOS driver: the “AbstractControlUnit”. AbstractControlUnit exposes all the APIs needed to interact with !CHAOS.
  41. Control Unit. AbstractControlUnit is an abstract C++ class that forces the developer to implement some methods. [Diagram: Control Unit Toolkit layered on the Common Toolkit.]
  42. Control Unit. AbstractControlUnit is an abstract C++ class that forces the developer to implement these methods: defineActionAndDataset, init, run, stop, deinit.
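The contract on this slide can be sketched as a minimal abstract class plus a toy subclass. Only the method names come from the slide; the signatures (no arguments, `void` return) are assumptions, and `DemoControlUnit` is purely illustrative.

```cpp
#include <string>
#include <vector>

// Sketch of the AbstractControlUnit contract the CU Toolkit imposes.
class AbstractControlUnit {
public:
    virtual ~AbstractControlUnit() = default;
    virtual void defineActionAndDataset() = 0;  // declare dataset + actions
    virtual void init()   = 0;
    virtual void run()    = 0;
    virtual void stop()   = 0;
    virtual void deinit() = 0;
};

// A toy driver recording the lifecycle order, to show how a concrete
// control unit extends the abstract class.
class DemoControlUnit : public AbstractControlUnit {
public:
    std::vector<std::string> log;
    void defineActionAndDataset() override { log.push_back("define"); }
    void init()   override { log.push_back("init"); }
    void run()    override { log.push_back("run"); }
    void stop()   override { log.push_back("stop"); }
    void deinit() override { log.push_back("deinit"); }
};
```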
  43.–46. CU Toolkit features (diagrams, progressively detailed): a multi-threaded message dispatcher — every CU has its own thread for dispatching messages after they are received by the RPC system. Example with the C++ method setDatasetAttribute: a BSON pack is sent from the Common Toolkit; the RPC dispatcher finds the destination; the CU's message queue is filled; the pack is sent on to the CU Toolkit scheduler and to the AbstractControlUnit.
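The routing step in the sequence above can be sketched as follows. The real dispatcher hands packs to a per-CU thread; this single-threaded sketch keeps only the routing into per-CU queues, and the key `destination_cu` is a hypothetical field name.

```cpp
#include <cstddef>
#include <map>
#include <queue>
#include <string>

// Stand-in for a serialized BSON document (key -> value).
using RpcPack = std::map<std::string, std::string>;

class Dispatcher {
    std::map<std::string, std::queue<RpcPack>> cuQueues;  // one queue per CU
public:
    void registerCU(const std::string& cuId) { cuQueues[cuId]; }

    // Find the destination CU named in the pack and fill its message queue.
    bool dispatch(RpcPack pack) {
        auto it = pack.find("destination_cu");
        if (it == pack.end()) return false;          // malformed pack
        auto q = cuQueues.find(it->second);
        if (q == cuQueues.end()) return false;       // unknown CU
        q->second.push(std::move(pack));
        return true;
    }

    std::size_t pending(const std::string& cuId) { return cuQueues[cuId].size(); }
};
```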
  47. !CHAOS UI Toolkit
  48. UI Toolkit. The UI Toolkit helps user-interface development. It gives APIs that help to develop monitoring processes, device controllers, etc. It provides a high-performance queue using a lock-free design; the queue can manage quantization, over time, of the dataset data acquisition. [Diagram: UIToolkit with DeviceControl objects, an inter-thread cache, live-acquisition logic, a thread pool and abstract I/O drivers over the network.]
  49. !CHAOS EU Toolkit
  50. EU Toolkit. The EU Toolkit helps the development of distributed algorithms (Execution Units) that can be used to compute or to automate device control. An EU can take as input the device channels or the whole dataset description, so it works on a “class” of devices, not directly on one device.
  51. EU Toolkit — scheme of the input/output flow of an EU. [Diagram: a Specialized Execution Unit extends the Abstract Execution Unit; an execution queue feeds input-parameter packs in, and a result queue carries output-result packs out.]
  52. EU Toolkit. Execution units can be concatenated, creating a sequence of any combination of: computing algorithms; control algorithms. [Diagram: the output-result pack of one Specialized Execution Unit becomes the input-parameter pack of the next.]
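The chaining on this slide can be sketched by treating a pack as a vector of values and an EU as a function from pack to pack — both simplifying assumptions; the real EUs consume and produce BSON packs through queues.

```cpp
#include <functional>
#include <vector>

using ParamPack = std::vector<double>;
// An Execution Unit: input-parameter pack in, output-result pack out.
using ExecutionUnit = std::function<ParamPack(const ParamPack&)>;

// Run a concatenated sequence of EUs: each unit's output feeds the next.
ParamPack runChain(const std::vector<ExecutionUnit>& chain, ParamPack input) {
    for (const auto& eu : chain)
        input = eu(input);
    return input;
}
```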
  53. !CHAOS Data Management
  54. Data Management. !CHAOS has two different management systems for the data of the controlled apparatus environment: Live Data and History Data.
  55. !CHAOS Live Data. [Diagram: dev_1 pushes data into the live cache; ui_1, ui_2 and eu_1 read from it.]
  56. Live Data. The Live Data system is designed for sharing the real-time state of the instruments (CUs) with the other nodes.
  57. Live Data. Every instrument pushes the data of its I/O channels into the shared cache through its CU. [Diagram: dev_1 → live cache → ui_1, ui_2, eu_1.]
  58. Live Data. (1) Every node can read data from the central cache at any time [polling]. (2) Nodes can register with the central cache to receive updates [push].
  59. Live Data — scalability & reliability. All data of all instruments are distributed across many cache servers. [Diagram: dev_1 and dev_2 push to different live caches; ui_1, ui_2 and eu_1 read from them.]
  60. Live Data — scalability & reliability. With the algorithm shown previously, when one server goes down, all clients that push or read data on that server switch to another server with the highest priority.
  61. Live Data. Using the two modes permits managing the bandwidth of the real-time data flows (read and write) more efficiently; not all nodes need real-time data at the same speed: mode (1) is for monitoring and control; mode (2) is for control algorithms that cannot lose packets.
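The two access modes can be sketched with an in-process cache. The real system uses a distributed shared cache across many servers; the class and method names here are illustrative.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

class LiveDataCache {
    std::map<std::string, std::string> latest;   // device -> last pushed pack
    std::map<std::string,
             std::vector<std::function<void(const std::string&)>>> subs;
public:
    // CU side: push the current state of a device's I/O channels.
    void push(const std::string& device, const std::string& pack) {
        latest[device] = pack;
        for (auto& cb : subs[device]) cb(pack);  // mode (2): push to subscribers
    }
    // Mode (1): any node polls the cache at any time.
    std::string poll(const std::string& device) { return latest[device]; }
    // Mode (2): control algorithms that cannot lose packets register here.
    void subscribe(const std::string& device,
                   std::function<void(const std::string&)> cb) {
        subs[device].push_back(std::move(cb));
    }
};
```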
  62. !CHAOS History Data
  63. Live & History Data. [Diagram: the ChaosQL Proxy stacks a CQL Interpreter over the Abstract History Logic and Cache Logic, a Common Layer, and the Storage, Cache and Index drivers.] Live-data management is merged into history-data management in the ChaosQL node through the Cache Driver.
  64. History Data. The History Data system has been designed to efficiently store the serialized data packs, manage different types of on-demand indexes, and manage the data files.
  65. History Data. It is realized by three different services and two … [Diagram: ChaosQL Proxy, Indexer and Management nodes, each built from Index/Storage Management Logic, Abstract History Logic, a Common Layer, and the Storage/Index drivers.]
  66. Chaos Query Language. ChaosQL implements the following operations on data: push history/live data per device; create index on device::attribute::{rule}; delete index on device::attribute; retrieve data with logic operations on attributes and indexes.
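The `device::attribute::{rule}` specification used by the index operations above can be sketched as a small parser. The `::` delimiter and the `{rule}` braces come from the slide; everything else (struct and function names, the behavior for a missing rule) is an assumption.

```cpp
#include <string>

struct IndexSpec {
    std::string device, attribute, rule;  // rule is empty for "delete index"
};

// Parse "device::attribute::{rule}" (the rule part is optional).
IndexSpec parseIndexSpec(const std::string& s) {
    IndexSpec out;
    std::string::size_type a = s.find("::");
    if (a == std::string::npos) { out.device = s; return out; }
    std::string::size_type b = s.find("::", a + 2);
    out.device = s.substr(0, a);
    if (b == std::string::npos) { out.attribute = s.substr(a + 2); return out; }
    out.attribute = s.substr(a + 2, b - a - 2);
    std::string r = s.substr(b + 2);
    if (r.size() >= 2 && r.front() == '{' && r.back() == '}')
        r = r.substr(1, r.size() - 2);    // strip the {rule} braces
    out.rule = r;
    return out;
}
```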
  67. History Data. [Diagram: ChaosQL node — CQL Interpreter, Abstract History Logic, Common Layer, Storage and Index drivers.] It manages the insert and select operations by parsing the Chaos Query Language. Insert: stores the data into the storage system as fast as possible. Select: decodes the query, then asks the indexer for the positions in the storage of the data that match the …
  68. History Data. All nodes, using the ChaosQL driver (which uses Direct I/O), can query the history and live data; the results are received in asynchronous mode. [Diagram: UI/CU/EU Toolkit → Chaos Common → CQL I/O Driver → Direct I/O Driver → Proxy (CQL Interpreter, Abstract History Logic, Common Layer, Storage/Index drivers); the query goes down, the result comes back.]
  69.–74. Storage write path (diagrams, progressively detailed): a CU (CUTK + CommonTK) pushes data to a CQL Proxy of the storage cluster; the proxy stores the data; the Management node launches the indexing; the Indexer gets the data pack from the storage, computes the index and stores it. (11/dic/2011)
  75.–81. Query path (diagrams, progressively detailed): an EU or UI node (EUTK/UITK + CommonTK) queries live data directly from a CQL Proxy; for a history query, the proxy sends the search to the Indexer, which executes it on the index DB; the proxy then gets the packs at the returned storage locations and the client receives the history data.
  82. History Data. The query can be split into more than one time period by the receiving proxy; the sub-queries are sent to other proxies, which process them, and the partial results are merged into the original result. [Diagram: the QueryChannel of the UI-CU-EU Toolkit sends a CQLPack to one proxy, which forwards sub-queries (CQLPack 1, 2, 3) to peer proxies and assembles their results.]
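The splitting step above can be sketched as dividing a requested time range into chunks, one per sub-query. Splitting into fixed-size chunks is an assumption; the slide only says the query is split into more than one time period.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct TimeRange { std::int64_t begin, end; };  // half-open: [begin, end)

// Split one history query into per-period sub-queries of at most
// `chunk` time units each; each sub-query can go to a different proxy.
std::vector<TimeRange> splitQuery(TimeRange q, std::int64_t chunk) {
    std::vector<TimeRange> subs;
    for (std::int64_t t = q.begin; t < q.end; t += chunk)
        subs.push_back({t, std::min(t + chunk, q.end)});
    return subs;
}
```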
  83. History Data — Indexer. Executes the necessary operations to index the data saved on the storage system. Two types of index: TMI — Time Machine Index; DSI — Dataset Index. [Diagram: Index Logic over Abstract History Logic, Common Layer, Storage and Index drivers.]
  84. History Data — TMI (Time Machine Index). This index permits cataloguing every data pack of a device by its timestamp: all data packs are recorded and associated with their timestamp. This index is managed by the system.
  85. History Data — DSI (Dataset Index). This index permits cataloguing attributes of the dataset for a given instrument; it will be possible to customize the index algorithm. This index is created on demand and on specific time periods.
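The Time Machine Index described above can be sketched as a per-device multimap from timestamp to storage location, so a time-range query returns the locations of the matching packs. The layout and names are assumptions for illustration; the real indexer runs as a separate service over an index database.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

class TimeMachineIndex {
    // device -> (timestamp -> location of the stored data pack)
    std::map<std::string, std::multimap<std::int64_t, std::string>> idx;
public:
    void record(const std::string& device, std::int64_t ts,
                const std::string& packLocation) {
        idx[device].emplace(ts, packLocation);
    }
    // Locations of all packs of `device` with timestamp in [from, to).
    std::vector<std::string> find(const std::string& device,
                                  std::int64_t from, std::int64_t to) {
        std::vector<std::string> out;
        auto& m = idx[device];
        for (auto it = m.lower_bound(from); it != m.end() && it->first < to; ++it)
            out.push_back(it->second);
        return out;
    }
};
```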
  86. History Data — Management. Executes management operations on the files that store the data packs. Some data packs can be subject to ageing; the ageing state will determine the … [Diagram: Storage Management Logic over Abstract History Logic, Common Layer, Storage and Index drivers.]
  87. Project Status (INFN-Vitrociset meeting, 23/02/12). [Diagram: C, LabVIEW and MatLab user interfaces over the User Interface Toolkit (20%) and Common Toolkit; distributed Directory, Live Data and History Data services; the CU Toolkit with Control Units, I/O Units and Execution Units, the CU-LV interface, and CU/IO/EU modules (config, init, set, stop, de-init).]
  88. Development details (INFN-Vitrociset meeting, 23/02/12): QT User Interface: prototype ready (Qt). UI Toolkit: prototype ready. C wrapper for the UI Toolkit: prototype ready. LV User Interface: prototype in preparation (shared library for LabVIEW). Live Data: single-instance (non-distributed) prototype ready. RPC client/server: prototype ready (msgpack and zmq implementations). Event client/server: prototype ready (boost asio implementation). Direct I/O client/server: to do. History Data service: performance tests (mongodb) ongoing (UniTV). Directory services: DS simulator ready; DS under development. Scalability & availability: test facility in preparation. Common Toolkit: prototype ready. CU Toolkit: prototype ready. CU modules: prototype ready (C++). CU-LV interface prototypes & modules: almost ready.
  89. ...closing. In the next year all development effort will be focused on realizing the history data system and the directory service; a prototype of each system will be available by Q2 2013.
  90. !CHAOS Group: Claudio Bisegni, Luciano Catani, Giampiero Di Pirro, Luca Foggetta, Giovanni Mazzitelli, Matteo Mara, Antonello Paoletti, Alessandro Stecchi, Federico Zani. Theses: Flaminio Antonucci (Tor Vergata, Ingegneria), Andrea Capozzi (Tor Vergata, Ingegneria), Francesco Iesu (Cagliari, Informatica). Thanks for your time.
