Distributed Deployment                                    version 1.0, 12/13/11

PathMATE Technical Notes
Pathfinder Solutions, Wrentham, MA, USA
www.pathfindermda.com    +1 508-568-0068
copyright 1995-2011 Pathfinder Solutions LLC, all rights reserved
Table Of Contents

1 Introduction
2 Overview
   2.1 Design Elements
      2.1.1 Task
      2.1.2 Process
      2.1.3 Processor
   2.2 Configurations and Connections
      2.2.1 Single Task
      2.2.2 Multi Task
      2.2.3 Multi Process
      2.2.4 Multi Process SPMD
   2.3 Modeling in the Multi-Process World
      2.3.1 NodeManagement
      2.3.2 Routing Parameters
      2.3.3 Intertask Contention and the PfdSyncList
3 Deploying to Distributed Topologies
   3.1 Structural Design Elements
   3.2 Topology Identification and Allocation
      3.2.1 Task Identification
      3.2.2 Task Priorities
      3.2.3 Task Stack
      3.2.4 ProcessType Definition
   3.3 Topology Definition – the Process Configuration Table
      3.3.1 Single Process Single Deployment
      3.3.2 Single Process Multi Deployment (SPMD)
      3.3.3 Process Configuration Table
4 Messaging and Connectivity
   4.1 Elements
   4.2 Connection Initiation and Reconnection
      4.2.1 Connect-on-Initial-Send
      4.2.2 Retry Initial Connection Delay
      4.2.3 Reconnecting an Established Connection
      4.2.4 Inter-Send Delay
   4.3 Outbound Message Queues
   4.4 Interprocessor and Endianness
      4.4.1 Built-in Type Data Item Serialization
      4.4.2 User-Defined Type Serialization – Advanced Realized Types
   4.5 Communication Errors
      4.5.1 Multi-Task
      4.5.2 Model-Level Interface – SW:RegisterForErrorGroup
   4.6 Generated Topology Summary Report
   4.7 Trace Debugging
   4.8 Realized High-Speed Communications
A. Thread Proliferation in Hand Written Systems
B. Marking Summary
C. Compiler Symbol Summary
Table Of Figures

Figure 1: Default Single-Task System Topology
Figure 2: Example Two-Task System Topology
Figure 3: Simplified Task Symbology
Figure 4: Multi-Task, Multi-Processor Topology
Figure 5: Simplified Process Symbology
Figure 6: SPMD Deployment
Figure 7: Default System Topology
Figure 8: One Process Two-Task System Topology
Figure 9: Example Multi-Task, Multi-Processor Topology
Figure 10: Example SPMD Deployment
1 Introduction

The PI-MDD method for constructing software for complex and high performance systems separates the complexities of the problem space subject matters from the strategies and details of the implementation platforms the system executes within. While programming language and target operating system are key aspects of this implementation platform, they represent one-time selections, often made by organizational inertia. Perhaps the most demanding facet of this separation discipline is the most complex and creative aspect of the platform: deploying a complex software system across a distributed execution unit topology with a range of processors, processes and tasks.

Modern software systems at nearly all levels of complexity execute with some degree of parallelism. Even the simplest systems apply separate threads of control to manage the lowest level of communication input and output. Conceptually coherent "single" systems often involve highly synchronized activities executing on separate processors. One of the most fundamental PI-MDD disciplines pushes nearly all aspects of this complexity out of the modeling space. So how is this clearly important, nearly ubiquitous, and non-trivial concern addressed with PI-MDD and PathMATE? The distributed deployment of PI-MDD models to multiple execution units is managed with an integrated set of model markings (properties), specialized code generation rules/templates, and a flexible set of implementation mechanisms. The PI-MDD models remain independent of this target environment topology, and can be deployed to a range of topologies to support unit testing, system testing, and alternative deployment architectures.

2 Overview

2.1 Design Elements

2.1.1 Task

Task – a separate thread of execution control that can run interleaved with or in parallel with other tasks in the same Process. Tasks within the same Process can address the same memory space. Tasks are supported by specific OS-level mechanisms. Even on "bare boards" without a multi-tasking OS, there is a main Task, and interrupt vectors are considered to run in a second Interrupt Task.

Modeled Task – a Task that was explicitly identified as part of the deployment of model elements through the TaskID marking. Modeled Tasks are managed automatically by PathMATE mechanisms, including start, stop, nonlocalDispatchers, inter-task incident queuing, and task-local memory pools. One instance of the PfdTask class (SW_Task in C) manages each Modeled Task that is executing modeled elements.

Each Modeled Task is designated to run on a specific ProcessType.

Note: When a realized domain is marked to run in a specific task, the resulting Task is still a Modeled Task because it was created by marking a model element – the realized domain – and it has all the conveniences of modeled domains in this regard. For the purposes of this document, a reference to a Task from here forward means a Modeled Task unless explicitly designated otherwise.
Incident – an instance of a modeled Signal or of an IncidentHandle (callback). Each Incident is designated to run within a specific Task, by the Task allocation of either its destination class or target service/operation.

NonlocalDispatcher – for each domain service and class operation that can be invoked (from PAL) from another task, a corresponding method is automatically generated to provide a function that can be directly invoked at the code level from any Task. A nonlocalDispatcher creates an IncidentHandle to its corresponding domain service or class operation and delivers this incident.

Realized Task – a task that was not identified as part of the deployment of model elements, and is controlled either by PathMATE mechanisms or by realized code independent of the Modeled Tasks. Realized Tasks may be started by PathMATE to support specific mechanism execution – e.g. communication receiver and sender tasks – or may be started and managed completely by realized code and therefore unknown to PathMATE mechanisms. For the purposes of this document, any reference to a Realized Task from here forward is explicitly designated.

2.1.2 Process

Process – generally contained within an executable, a Process is a set of one or more Tasks that can run interleaved with each other, or genuinely in parallel, controlled by a single scheduler and sharing a memory space. Separate Processes cannot address the same general memory space. PathMATE supports the external identification of a process via IP address and port. One or more Processes may run on a Processor. Some deployment environments (RTOSs) support only a single Process running on a single processor. For the purposes of this technical note, a Processor executing a single executable with one or more Tasks is termed to have a single Process executing. One instance of the PfdProcess class (SW_Process in C) manages each Process that is executing modeled elements.

Process Identifier (PID) – a numerical value used to uniquely address each process instance within an intercommunicating set of PathMATE-based Processes. This is specified in the Process Configuration Table and can be used as part of a Routing Parameter.

Process Type – a "type" of executable built with a potentially unique set of PI-MDD elements, and with a specific set of conditional compiler directives, deployed together as a Process. Typically each different executable is a Process Type. One or more instances of a Process Type may be running at any given time in a system, each with its own unique Process Identifier. Model elements are deployed to a Process Type via the ProcessType marking.

Process Configuration Table – a table constructed at runtime within each PathMATE Process that identifies the Process itself (Process Type and Process ID) and the process instances it can connect to. This table is either loaded from a file at startup or constructed from interprocess (socket) messages received throughout system processing. The Process Configuration Table in each process instance must have consistent subsets of information with all other Processes it can connect to.

2.1.3 Processor

Processor – a distinct physical processor that may run one or more Processes. For the purposes of this technical note the Processor is a secondary concept, and the Process is used to bound deployed elements that run on a specific Processor.
2.2 Configurations and Connections

2.2.1 Single Task

From an application perspective, the work of the system is done in the Tasks by PIM processing – model actions executing. Each Task performs this work by dispatching Incidents, which may cause actions to run. The Incidents are held in an Incident Queue.

Figure 1: Default Single-Task System Topology

2.2.2 Multi Task

If two or more Tasks run within a Process, then a task-safe Inter-Task Incident Queue (ITIQ) is used to queue Incidents between Tasks. PIM processing may generate Incidents that are queued locally, or on another Task's ITIQ.

Figure 2: Example Two-Task System Topology
This internal Task structure is common for all Modeled Tasks. For simplicity, from here forward Tasks will be shown as a simple outline and name:

Figure 3: Simplified Task Symbology

2.2.3 Multi Process

When a System is deployed to two or more Processes, TCP/IP sockets are used to convey messages between them. Incidents – service handles or events – are sent between processes and dispatched upon receipt to PIM tasks, where they initiate PIM processing. Each incident to be sent is converted to a PfdSockMessage through Serialization, where the data members of the incident and its parameter values are encoded into the stream of bytes within the PfdSockMessage.

In each Process a number of realized communication tasks are automatically created and managed by the PfdProcess. One receiver task receives messages from any source, and a sender task for each destination Process instance manages the connection with, and sends messages to, that process. Each sender task has an Inter-Task Message Queue (ITMQ) where each outbound PfdSockMessage is queued by local PIM processing for sending. The ITMQ is implemented with a PfdSyncList containing instances of PfdSockMessageOut.
In the topology shown below there is a single instance of ProcessType MAIN and a single instance of ProcessType DEVICE_CONTROL.

Figure 4: Multi-Task, Multi-Processor Topology
This internal Process structure is common for all Processes. For simplicity, from here forward Processes will be shown as a simple outline and name, containing only their Modeled Tasks:

Figure 5: Simplified Process Symbology

2.2.4 Multi Process SPMD

In some systems there can be more than one instance of a Process of a single ProcessType. This general capability is termed Single Process Multi Deployment (SPMD). Often each different instance is deployed to its own processor. Note the varying PIDs in the diagram below.

Figure 6: SPMD Deployment

2.3 Modeling in the Multi-Process World

With the underlying asynchronous nature of PI-MDD model element interactions and the explicit PI-MDD preference for asynchronous interactions at the domain level (and below), proper PI-MDD models should generally be well formed for a range of deployment options. Certainly this is a solid start, and leaves the Application Modeler and the Target Deployment Designer on good joint footing to develop an effective deployment. But at some point there may be aspects of the PI-MDD model that need adjustment to facilitate proper deployment.
2.3.1 NodeManagement

In systems that fully utilize complex multi-processing target environments, there is often explicit interaction between high-level application control ("mission-level" logic) and detailed aspects of the platform and its resources. In this realm these interactions form a bona fide Problem Space subject matter: NodeManagement. A Domain can be created to encapsulate interactions with topology awareness, local processing resources, and multi-process communication mechanisms. This relieves application domains of the need to respond intelligently and flexibly to details of topology and resources, localizing these capabilities in one component.

2.3.2 Routing Parameters

To explicitly route the invocation of a domain service to a specific process id/task id combination, a domain service can specify a parameter as a <<Routing>> parameter. A parameter of type DestinationHandle or Group<DestinationHandle> can have its Stereotype marking set to Routing, which causes its nonlocalDispatcher to be generated with appropriate routing code, using the parameter runtime value(s).

But where does a caller get the right values? Often a class can have the proper routing information instantiated at startup (via a static initializer, XML file or binary instance data) and use these attributes. The SoftwareMechanisms domain services DestinationHandleLocal() and DestinationHandleRemote() provide encoding of specified Process IDs and Task IDs into DestinationHandles at runtime.

In systems with more complex or dynamic topologies, a NodeManagement domain can maintain the appropriate body of routing data. It can publish services to provide this data at the level of abstraction appropriate to preserve the application's independence from specific topologies. The caller can then go to NodeManagement to get timely and appropriate routing information.

2.3.3 Intertask Contention and the PfdSyncList

A simple starting point for deploying a domain within a Process Type is to allocate the domain – in its entirety – to execute within a single task. This way there is no contention between tasks, because domains share no resources that they need to explicitly protect.

However, there are legitimate design contexts where it is advantageous to deploy a single domain to multiple tasks in a single process. In this context a danger emerges when an element of the domain accesses a shared asset from one task while the same asset can be accessed from another task. In this case a protection mechanism is required. These shared assets are class instance containers – both for an association (across from a many participant) and for a class instance population. The ThreadSafe marking is supported on the Association and the Class. Setting it to "T" generates a PfdSyncList for the instance container, which uses an internal PfdReentrantMutex to make access safe for intertask use.
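For example, a minimal marking sketch – the element paths here (a DeviceControl domain with a Sensor class and an association R3) are hypothetical, and the exact marking path syntax should be verified against your PathMATE properties files:

    Class,*.DeviceControl.Sensor,ThreadSafe,T
    Association,*.DeviceControl.R3,ThreadSafe,T

With these markings, the Sensor instance population container and the R3 association container are each generated as a PfdSyncList, safe to access from any task in the process.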
3 Deploying to Distributed Topologies

Structural Design is the PI-MDD activity where the execution units for the deployment environment are identified and model elements are allocated to them.

3.1 Structural Design Elements

The construction, interconnection and deployment of multi-processor systems from PI-MDD models requires:

- The Model: While the Domain is the primary initial focus for deployment, elements within a domain can also be allocated separately to their own execution units, including the Domain Service, Class, Class Operation, and State.
- The Topology: Tasks and process types are identified via the markings ProcessType and TaskID, applied to a range of model elements.
- Generated Code, Projects: While general mechanism layer code exists to support Distributed Deployment, nothing specific to any topology exists until after Transformation. Markings drive the generation of build folders for each ProcessType and code tailored to the target topology.
- Run-Time Data: Each specific instance of a Process is identified via a Process ID in the Process Configuration Table. While the actual image file code content for each instance of a given ProcessType is identical (they all use the same executable file), individual Process instances can be configured to behave differently via class instance and link data. These can be captured in XML or binary instance data files which are deployed with the correct process instance and/or working directory.

3.2 Topology Identification and Allocation

The identification of execution units starts with simple default allocations. If no topology markings are specified at all, the default configuration is a single ProcessType named "MAIN", with a single Task named "SYS_TASK_ID_MAIN".

Figure 7: Default System Topology

In this configuration the system project files are generated with compiler symbol settings that hide the inter-task and inter-process code in the PathMATE Mechanisms layer. While realized communications code can always be included in the system, it will not have any PathMATE generated communications mechanisms.

3.2.1 Task Identification

Additional Tasks are identified by the "TaskID" marking, which can be applied to analysis elements of type Domain, DomainService, Object, or ObjectService. The default allocation for Domains is SYS_TASK_ID_MAIN. The default TaskID for all other elements is the TaskID of their containing element.
A TaskID can be set to any identifier name; however, by convention all fixed task ids are in the form SYS_TASK_ID_<name>. The task id SYS_TASK_ANY has special meaning, indicating all actions in the marked element are executed locally within the calling task.

An additional special TaskID value, DYNAMIC, indicates a new task is started, or retrieved from the task pool, and the object/service is run in that task. This rich capability is described in the separate Tech Note "PathMATE Dynamic Tasks".

Example: Deploy a system to a topology with two tasks: SYS_TASK_ID_MAIN and SYS_TASK_ID_LOGGING. The following marking deploys the Instrumentation domain to the LOGGING task, thereby causing the generation of a multi-task system:

    Domain,*.Instrumentation,TaskID,SYS_TASK_ID_LOGGING

This one marking line causes the system to deploy in this configuration, with the Instrumentation domain actions running in the "PIM processing" bubble in the SYS_TASK_ID_LOGGING task, and the remainder of the system running in the "PIM processing" bubble in the SYS_TASK_ID_MAIN task:

Figure 8: One Process Two-Task System Topology

An invocation of an Instrumentation domain service from PIM processing (a model action) within the SYS_TASK_ID_MAIN task is generated as a call to that domain service's nonlocalDispatcher counterpart. This nonlocalDispatcher creates a PfdServiceHandle to the target service on the SYS_TASK_ID_LOGGING task and delivers it – placing it on the ITIQ for SYS_TASK_ID_LOGGING.

3.2.2 Task Priorities

The enumeration pfdos_priority_e defined in pfd_os.hpp specifies the task priorities that are generally available:

    SYS_TASK_PRIORITY_HIGHEST
    SYS_TASK_PRIORITY_HIGHER
    SYS_TASK_PRIORITY_NORMAL
    SYS_TASK_PRIORITY_LOWER
    SYS_TASK_PRIORITY_LOWEST
Modeled Tasks

Any model element that can be marked with TaskID can also optionally be marked with TaskPriority, specifying one of the above values. The default is SYS_TASK_PRIORITY_NORMAL.

NOTE: all model elements that explicitly set the same TaskID must have the same TaskPriority.

Mechanisms Realized Tasks

The priorities of realized tasks started by PathMATE mechanisms are controlled by the following compiler symbols:

    task priority symbol            description                default value
    PFD_MAIN_OOA_THREAD_PRIORITY    OOA processing task        SYS_TASK_PRIORITY_NORMAL
    PFD_RECEIVER_THREAD_PRIORITY    TCP receiver task          SYS_TASK_PRIORITY_NORMAL
    PFD_TRANSMIT_THREAD_PRIORITY    TCP sender task            SYS_TASK_PRIORITY_NORMAL
    PFD_IE_THREAD_PRIORITY          Spotlight connection task  SYS_TASK_PRIORITY_LOWEST
    PFD_INPUT_THREAD_PRIORITY       Driver input task          SYS_TASK_PRIORITY_NORMAL

The following definition pattern in pfd_os.hpp supports the external specification of a priority for each type of realized task:

    #ifndef PFD_TRANSMIT_THREAD_PRIORITY
    #define PFD_TRANSMIT_THREAD_PRIORITY SYS_TASK_PRIORITY_NORMAL
    #endif

In this manner the system Defines marking can be used to override default realized task priorities.
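For example, a sketch of overriding the TCP sender task priority via the Defines marking – the exact value syntax accepted by Defines is assumed here and should be checked against the PathMATE marking reference:

    System,*,Defines,PFD_TRANSMIT_THREAD_PRIORITY=SYS_TASK_PRIORITY_LOWER

Because pfd_os.hpp only defines each priority symbol when it is not already defined, a value supplied this way takes precedence.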
3.2.3 Task Stack

By default, the stack size allocated to each Task is controlled by OS defaults. When the task is started a 0 is passed into the OS call, and the OS determines the size of the stack.

You can specify a default stack size for all Tasks via the DefaultStackSize system marking, e.g.:

    System,SimpleOven,DefaultStackSize,500000

In addition to controlling size, the actual memory used for the task stack can be allocated explicitly, allowing custom realized code to monitor it for overrun, etc. By marking the system with a non-0 DefaultStackSize and defining the compile flag PATH_ALLOCATE_EXPLICIT_STACK, a stack is explicitly allocated in pfd_start_task_param(). If the Task being started is a modeled task, the instance of PfdTask that it is running has its stack data members set with this info. (Platform support may vary; inspect pfd_start_task_param() in pfd_os.cpp to see if your platform is supported.)

3.2.4 ProcessType Definition

By default a single ProcessType is generated – MAIN. Build file generation produces a single build folder called MAIN, and produces a single executable. By default all domains and domain services are allocated to the ProcessType MAIN. Additional ProcessTypes can be defined by allocating a domain or domain service via the ProcessType marking. For example:

    Domain,*.ExternalDeviceControl,ProcessType,DEVICE_CONTROL

Once more ProcessTypes are defined (in addition to MAIN), code is generated to communicate between multiple processes. Each ProcessType generates a build folder with the ProcessType name, and generates code that starts the domains and tasks configured for that ProcessType. The code also automatically starts the PfdProcess's realized communication tasks to handle inter-process communications.

With the definition of ProcessTypes, PfdIncident::deliver() – used by nonlocalDispatchers – is extended where needed to handle inter-process invocation scenarios, automatically using the PfdProcess's realized communication tasks.

Example: Deploy a system to a topology with two ProcessTypes. ProcessType MAIN has two tasks – SYS_TASK_ID_MAIN and SYS_TASK_ID_LOGGING. ProcessType DEVICE_CONTROL has two tasks – SYS_TASK_ID_MAIN and SYS_TASK_ID_REGISTER_IO. The following markings deploy the DeviceControl and HardwareIF domains to the DEVICE_CONTROL ProcessType, thereby causing the generation of a multi-process system:

    Domain,*.DeviceControl,ProcessType,DEVICE_CONTROL
    Domain,*.HardwareIF,ProcessType,DEVICE_CONTROL

Because all other domains by default are allocated to ProcessType MAIN, and we want SoftwareMechanisms services to run locally wherever they are called, we allocate this domain to ProcessType ANY, TaskID SYS_TASK_ANY:

    Domain,*.SoftwareMechanisms,ProcessType,ANY
    Domain,*.SoftwareMechanisms,TaskID,SYS_TASK_ANY

To retain the Instrumentation deployment configuration, and to run the HardwareIF domain in its own task, we add these markings:

    Domain,*.Instrumentation,TaskID,SYS_TASK_ID_LOGGING
    Domain,*.HardwareIF,TaskID,SYS_TASK_ID_REGISTER_IO

These 6 markings cause the system to deploy in this configuration:
Figure 9: Example Multi-Task, Multi-Processor Topology

To specify a default process type other than MAIN, the default ProcessType can be changed via the system marking DefaultProcessType, for example:

    System,*,DefaultProcessType,MY_PROCESS

3.3 Topology Definition – the Process Configuration Table
3.3.1 Single Process Single Deployment

Once an executable is built for each ProcessType, one or more instances of the executable may be started. By default PathMATE assumes a simple configuration where a single instance of each ProcessType is run. The PathMATE PfdProcess class has a Process Configuration Table to keep track of the configuration of Process Instances that run in the system. Each Process tracks the processes it can connect to with the following information:

    Process ID    ProcessType    IP address    TCP port

Table 1: Process Configuration Table Fields

A key step in understanding this Process Configuration Table is the generated gc/sys/system_indices.hpp, where the ProcessType constants are defined:

    /* define process types for the system */
    enum
    {
       SYS_PROCESS_TYPE_MAIN = 0,
       SYS_PROCESS_TYPE_DEVICE_CONTROL = 1,
       SYS_PROCESS_TYPE__count = 2
    };

The generated code includes a default Process Configuration Table that allows one instance of each ProcessType. The example system depicted in Figure 9: Example Multi-Task, Multi-Processor Topology has the following default Process Configuration Table:

    Process ID   ProcessType   IP address   TCP port
    0            0             127.0.0.1    5501
    1            1             127.0.0.1    5502

Table 2: Default Process Configuration Table

3.3.2 Single Process Multi Deployment (SPMD)

Some system configurations call for more than one instance of a given ProcessType to be executing at the same time. Essentially this permits two or more entries in the Process Configuration Table to specify a given ProcessType.

In cases where a single instance of a given ProcessType is configured, the system can continue to use implicit routing of services to that process based only on ProcessType, as was done in the Single Process Single Deployment configuration. However, for cases where two or more instances of a given ProcessType are configured, the system requires a way for the application to control in which process instance a given service will run.

A Domain Service with a parameter marked as a Routing Parameter routes the service invocation to the Process ID and Task specified by this parameter. In this manner a PathMATE system can utilize multiple Process Instances of the same Process Type.

3.3.3 Process Configuration Table

To specify a custom target topology, a Process Configuration Table is constructed in a comma-separated file, specifying a row in the table for each Process instance. Each row must have a unique Process ID and a unique combination of IP address and TCP port.

This example table shows our system with 4 instances of the DEVICE_CONTROL process running simultaneously, as depicted below (SimpleOven_with_4_DCs.txt):
    Process ID   ProcessType   IP address   TCP port
    0            0             43.2.0.1     6000
    1            1             43.2.1.21    7001
    2            1             43.2.1.31    7002
    3            1             43.2.1.41    7003
    4            1             43.2.1.51    7004

Table 3: Example SPMD Process Configuration Table

Figure 10: Example SPMD Deployment

The location of the Process Configuration Table file is provided to the process executable at run time via the "-config" command line argument, e.g.:

    SimpleOven-DEVICE_CONTROL.exe -config SimpleOven_with_4_DCs.txt

A simple strategy to ensure each Process Instance has process configuration information consistent with all other Process Instances is to build a single Process Configuration Table file and provide it to all Process Instances when they start. However, if some subsets of Process Instances do not communicate with other subsets of Process Instances, each Process Instance can have its own version of the Process Configuration Table. Not all copies need to contain all Process Instance entries. The following rules apply:

- When specified, each Process Instance (identified by Process ID) must specify the same Process Type, IP Address and TCP port in all Process Configuration Table files
- A Process Instance can only send messages to a Process Instance it knows about via its own Process Configuration Table file
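As a concrete sketch, Table 3 might be captured in SimpleOven_with_4_DCs.txt roughly as follows – the column order follows Table 1, but the exact file syntax accepted by "-config" should be verified against your PathMATE release:

    0,0,43.2.0.1,6000
    1,1,43.2.1.21,7001
    2,1,43.2.1.31,7002
    3,1,43.2.1.41,7003
    4,1,43.2.1.51,7004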
3.3.3.1 Additional Fields – UDP Port and Spotlight Port

To support external realized code that may require UDP port information, and to allow the manual specification of Spotlight ports, the Process Configuration Table carries two additional fields:

    Process ID    ProcessType    IP address    TCP port    UDP port    Spotlight Port

Table 4: Additional Process Configuration Table Fields

The Spotlight port is used by the Spotlight instrumentation code. In addition, these static methods are provided by the PfdProcess class for realized code-level access to the topology information:

    // PROCESS TABLE ACCESSORS
    // Get process ID for the specified IP address
    static int getPidFromIpAddress(int ip_address);
    // Get IP address for specified process
    static int getIpAddress(DestinationHandle dest);
    // Get TCP port for specified process
    static int getTcpPort(DestinationHandle dest);
    // Get UDP port for specified process
    static int getUdpPort(DestinationHandle dest);
    // Get Spotlight port for specified process
    static int getSpotlightPort(DestinationHandle dest);
    // Get Spotlight port for current process
    static int getSpotlightPort();
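For example, realized code might read a peer's topology information like this – a sketch only, assuming a DestinationHandle (dest) has already been obtained, e.g. via the SoftwareMechanisms services described in section 2.3.2:

    // Sketch: realized-code access to topology information.
    // 'dest' is a DestinationHandle obtained elsewhere (assumption).
    int tcp_port  = PfdProcess::getTcpPort(dest);     // peer's TCP listener port
    int udp_port  = PfdProcess::getUdpPort(dest);     // for external realized UDP code
    int spot_port = PfdProcess::getSpotlightPort();   // Spotlight port of this process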
3.3.3.2 Adding to the Process Configuration Table at Run Time

Immediately preceding the first message to carry application data, a connect message is sent to the destination Process Instance. The connect message carries the sender's PID, IP address and TCP listener port. If the destination Process Instance did not already have an entry in its Process Configuration Table for the sender Process Instance, one is added.

3.3.3.3 Minimal Process Configuration Table

To facilitate deployments to targets without file systems, or otherwise without topology configuration files, a Process Instance can discover and build its system topology table at run time. To suppress the default process configuration, start the process executable with the "-pid" and "-port" command line arguments, e.g.:

    SimpleOven-DEVICE_CONTROL.exe -pid 4 -port 7004

Do not specify a process config table. Other processes connecting to this Process Instance will add to its internal process config table via the information provided in their connect messages.

Key limitation: a Process Instance with a Minimal Process Configuration Table cannot send messages to other processes until those processes send a message to this Process Instance first.

4 Messaging and Connectivity

4.1 Elements

The fundamental unit of messaging in a PI-MDD model is the IncidentHandle. This is a PathMATE Mechanism that identifies a specific domain service, class operation or signal to be invoked/generated, and carries the parameter values needed for any input parameters defined by that service. As an asynchronous mechanism, an IncidentHandle cannot be constructed for a service/operation with return values or output parameters. IncidentHandles are often referred to by the term Incident. There are two subtypes of IncidentHandle – the Event and the ServiceHandle. ServiceHandles handle domain service and class operation invocations, so they are the type of IncidentHandle most commonly encountered.

In a PI-MDD model the invocation of a domain service or class operation is specified in PAL with a construct that looks just like a synchronous function/procedure/method invocation from common implementation languages. Depending on the marking of the model for topology, the task/process context of the caller may be in a different execution unit (task/process) than the action being invoked. This requires the resolution of the PAL invocation with a range of possible implementations. The following terms classify these resolutions:

- Local: the caller and the target service/operation are within the same task
- Inter-task: the caller and the target service/operation are within the same process, but in different tasks
- Inter-process: the caller and the target service/operation are in different processes (there is no distinction between node-local inter-process and inter-processor inter-process communications)

    incident type         locality       communication mechanism
    Operation Invocation  Local          synchronous function/method invocation;
                                         local IncidentHandle dispatch
                          Inter-task     convert(*1); queue on target task ITIQ
                          Inter-Process  convert(*1); send via socket; upon receipt
                                         queue on target task ITIQ
    IncidentHandle CALL   Local          local dispatch
                          Inter-task     queue on target task ITIQ
                          Inter-Process  send via socket; upon receipt queue on
                                         target task ITIQ

    NOTES: *1 – The operation invocation is converted automatically to an IncidentHandle.

Table 5: Incident Communication Mechanisms

4.2 Connection Initiation and Reconnection

When a Process starts up, the Process Configuration Table is established as outlined in section 3.3.3 Process Configuration Table. For each row in the Table a Connection is created to maintain the current state of the messaging connection to each known remote process. However, at this time no actual connections to other Process instances are started. Instead they are started on demand, when an action in the local Process causes a message (incident) to be sent to a remote Process.

4.2.1 Connect-on-Initial-Send

When an outbound message is queued, the state of the Connection to the destination Process is checked. If it is not connected at this time, connection is initiated. This initial outbound message is held in a queue until the connection is successfully established. Then the message, along with any others that may have been queued up, is sent to the destination. Once all are sent, the connection is left up – in a usable state. Later, as other messages are sent to this same destination, they can simply be sent using the established connection.
4.2.2 Retry Initial Connection Delay

If the initial attempt to establish a connection to a remote Process fails, reconnection is attempted every PATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS milliseconds. If not specified by the user, this compiler symbol has a default value of 1000.

4.2.3 Reconnecting an Established Connection

If a connection fails that had already been established and was in use, a priority is put on quickly restoring it. A limited number of reconnection attempts are tried immediately after the failure is detected, without any delay between them. The number is controlled by the compiler symbol PATH_IMMEDIATE_RECONNECT_MAX_COUNT. If not specified by the user, this compiler symbol has a default value of 5. If the connection cannot be reestablished within PATH_IMMEDIATE_RECONNECT_MAX_COUNT iterations, reconnection attempts continue every PATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS milliseconds.
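The following sketch illustrates this reconnection policy only – the Connection type and try_connect() helper are hypothetical stand-ins, not the actual PathMATE mechanism code; pfd_sleep is the mechanisms sleep call named in section 4.2.4:

    // Illustration of the reconnection policy for an established connection.
    struct Connection { /* socket state for one remote process (hypothetical) */ };
    bool try_connect(Connection& conn);   // hypothetical: one connect attempt
    void pfd_sleep(int milliseconds);     // PathMATE sleep call (declared for the sketch)

    #ifndef PATH_IMMEDIATE_RECONNECT_MAX_COUNT
    #define PATH_IMMEDIATE_RECONNECT_MAX_COUNT 5
    #endif
    #ifndef PATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS
    #define PATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS 1000
    #endif

    void reconnect_established(Connection& conn)
    {
        // First a burst of immediate retries, to restore an in-use connection quickly.
        for (int attempt = 0; attempt < PATH_IMMEDIATE_RECONNECT_MAX_COUNT; ++attempt)
            if (try_connect(conn))
                return;
        // Then fall back to paced retries, as for an initial connection attempt.
        while (!try_connect(conn))
            pfd_sleep(PATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS);
    }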
4.2.4 Inter-Send Delay

The default task priorities place sender tasks at a lower priority, and this is expected to allow PIM processing to continue even if sender tasks are fully occupied by outbound message traffic. To facilitate the even distribution of message activity across all sending tasks and the receiver task, a "sleep" (via pfd_sleep) of 0 milliseconds is used after each individual socket send call to release the processor if needed. This sleep also happens between individual socket message packet sends for large messages (large as defined by your current TCP/IP stack configuration).

If the user wishes to further throttle outbound message traffic, PATH_DELAY_BETWEEN_MESSAGE_PACKET_SENDS_MS can be specified. If not specified by the user, this compiler symbol has a default value of 0.

Alternatively, if the user wants to eliminate the inter-packet "sleep" altogether, PATH_DELAY_BETWEEN_MESSAGE_PACKET_SENDS_MS can be set to -1, and the pfd_sleep call is skipped.

4.3 Outbound Message Queues

When a PIM processing Task delivers an Incident to be sent interprocess, it is serialized into a PfdSockMessageOut and placed on the outbound message queue for the Connection corresponding to its destination. The queue is a PfdSyncList, and is limited in length based on the current socket connection state of the Connection.

PATH_ENTRY_DISCONNECTED_TX_QUEUE_LIMIT specifies the maximum number of messages held by the queue when disconnected. If not specified by the user, this compiler symbol has a default value of 128. PATH_ENTRY_CONNECTED_TX_QUEUE_LIMIT specifies the queue limit when connected, and its default is 1024.

If PIM processing attempts to enqueue an outbound message when the outbound message queue is at its capacity, the oldest message is discarded.

4.4 Interprocessor and Endianness

The inter-processor communication pattern is the same as the inter-process case, with sockets providing connectivity, but an additional concern for inter-processor communication is byte ordering. To ensure consistency of communications and readability of all messages between processors of either endianness, all messages are constructed with Network Byte Ordering (big endian). The C++ industry-standard approach to serialization, with a parent serializable class and a data factory for deserialization, is implemented with the PfdSerializable mechanism class. This performs the proper message construction and decoding automatically for all PI-MDD modeled elements and built-in types.

4.4.1 Built-in Type Data Item Serialization

Incident parameter values are serialized and transmitted between Processes automatically when the invocation of the target service/operation/event crosses Process boundaries.

4.4.1.1 Special Type Handling

To ensure complete encoding of a data item of a user-defined type that is implemented as a 64-bit integer, the user-defined type is marked with an ExternalType value of "long long" or "unsigned long long".

Real values are implemented with the double type, and are serialized per the IEEE Standard for Floating-Point Arithmetic (IEEE 754).

An IncidentHandle passed as a parameter is serialized in its entirety – including all of its own parameter values – and sent to the destination.

In the case of the built-in type Handle and user-defined pointer-based types derived from it, only the pointer value is transmitted. The expectation is that Handle pointers are not dereferenced anywhere except where they were created.

4.4.2 User-Defined Type Serialization – Advanced Realized Types

The user may define a class at the realized code level that implements a model-level user-defined type. If data items (parameters) of this type end up crossing process boundaries, PathMATE serialization mechanisms may be applied to aid in this communication.

An Advanced Realized Type (ART) is created by inheriting your realized class from the PathMATE PfdSerializable mechanism class. The user then provides their own class-specific implementations for the virtual serialization and deserialization methods specified in PfdSerializable. In addition, the model-level user-defined type is marked with Serializable = TRUE.

For C++ and Java, the underlying implementation class for the data type must inherit from the PfdSerializable class, and implement the virtual methods insertToBuffer and extractFromBuffer. Network-safe serialization and de-serialization functions are provided in msg_base.hcpp/hpp for all supported serializable scalar types.
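As an example, here is a minimal C++ sketch of an ART – the member layout, the exact insertToBuffer/extractFromBuffer signatures, the header names, and the serialize_*/deserialize_* helper names are all assumptions for illustration; check PfdSerializable and msg_base.hpp in your PathMATE mechanisms for the real forms:

    #include "pfd_serializable.hpp"   // assumed header providing PfdSerializable
    #include "msg_base.hpp"           // network-safe scalar (de)serialization helpers

    // Hypothetical realized class implementing a model-level user-defined
    // type 'SensorReading'.
    class SensorReading : public PfdSerializable
    {
    public:
        SensorReading() : channel(0), value(0.0) {}

        // Encode members into the message buffer in a fixed, network-safe order.
        virtual void insertToBuffer(message_buffer_t* msg_buf, int* msg_len)
        {
            serialize_int(msg_buf, msg_len, channel);      // assumed helper name
            serialize_double(msg_buf, msg_len, value);     // assumed helper name
        }

        // Decode members in exactly the order they were inserted.
        virtual void extractFromBuffer(message_buffer_t* msg_buf, int* msg_len)
        {
            channel = deserialize_int(msg_buf, msg_len);   // assumed helper name
            value   = deserialize_double(msg_buf, msg_len);
        }

    private:
        int    channel;
        double value;
    };

The corresponding model-level user-defined type would then be marked with Serializable = TRUE, plus ExternalType=SensorReading and an IncludeFile naming the defining header.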
In C, the following additional properties are required to specify the serialization functions:

- SerializeFunction=<name of serialize function>
- DeserializeFunction=<name of deserialize function>

These serialize and deserialize functions must match the following function pointer type definitions defined in sw_msg_base.h:

    typedef void (*sw_serialize_function_t)(message_buffer_t *msg_buf, int *msg_len,
                                            void* target, bool_t is_ascii);

    typedef void* (*sw_deserialize_function_t)(message_buffer_t *msg_buf, int *msg_len,
                                               bool_t is_ascii);

Additional (existing) markings are also helpful in providing all the information needed to conveniently apply these capabilities:

- ExternalType=<implementation type this user-defined type maps to>
- IncludeFile=<name of file defining above ExternalType>

4.5 Communication Errors

The Software Mechanisms Error Registrar is the central point for error handler callback registration and error reporting. Errors are grouped, and error handlers are registered for groups. If no user-defined error callback has been registered for a group and an error is reported against the group, a default error handler is applied. In addition to pre-defined error groups, the user can define their own error groups and error codes.

4.5.1 Multi-Task

The Error Registrar provides a process-wide point of error notification registration for application-level domains (and realized code). Model-level callbacks can be registered for notification when an error is received under a specified ErrorGroup.

4.5.2 Model-Level Interface – SW:RegisterForErrorGroup

The service SoftwareMechanisms:RegisterForErrorGroup(Integer error_group, IncidentHandle error_handler) handles callback registration for any group of errors reported via SoftwareMechanisms:ReportError() and from built-in mechanism-level error reporting. Errors are grouped by type, and an error handler is registered for a specific group. If no user-defined error callback has been registered for a group and an error is reported against the group, a default error handler is applied.

User-provided error handlers must conform to the ErrorHandle Incident Profile – publishing the same parameters as the SoftwareMechanisms:DefaultErrorHandler service (Integer error_group, Integer error_code).

In addition to pre-defined error groups, the user can define their own groups.

If a communication error happens, the provided callback is loaded with values for error_group and error_code, and then called. For interprocess communication errors, the group SW_ERROR_GROUP_COMMUNICATIONS is used. This group includes the following error codes:

    SW_ERROR_CODE_COMMUNICATIONS_ACCEPT_FAILED
    SW_ERROR_CODE_COMMUNICATIONS_ADDRESS_FAILURE
    SW_ERROR_CODE_COMMUNICATIONS_BIND_FAILED
    SW_ERROR_CODE_COMMUNICATIONS_CONNECT_FAILED
    SW_ERROR_CODE_COMMUNICATIONS_DISCONNECT
    SW_ERROR_CODE_COMMUNICATIONS_INCONSISTENT_CONNECTIONS
    SW_ERROR_CODE_COMMUNICATIONS_LISTEN_FAILED
    SW_ERROR_CODE_COMMUNICATIONS_NETWORK_INIT_FAILED
    SW_ERROR_CODE_COMMUNICATIONS_OTHER
    SW_ERROR_CODE_COMMUNICATIONS_RECEIVE_FAILED
    SW_ERROR_CODE_COMMUNICATIONS_SEND_FAILED
    SW_ERROR_CODE_COMMUNICATIONS_SOCKET_CREATE_FAILED
    SW_ERROR_CODE_COMMUNICATIONS_SOCKET_SHUTDOWN_FAILED
    SW_ERROR_CODE_COMMUNICATIONS_SOCKET_ERROR
    SW_ERROR_CODE_COMMUNICATIONS_TIMEOUT
    SW_ERROR_CODE_COMMUNICATIONS_CONNECT_MESSAGE_MISMATCH
See error_codes.hpp for the most complete list of pre-defined error codes.

4.6 Generated Topology Summary Report

Each time code is generated, TopologyReport.txt is generated into the _info subfolder of the deployment folder. This is the system topology report, emitting the ProcessType and TaskID of all deployable elements. This report can be helpful for complex systems as a definitive reference for the actual ProcessType and TaskID of each item, resolving potential ambiguities with defaults and multiple properties files.

4.7 Trace Debugging

Compiler preprocessor definitions are used to control trace debugging – printing to the standard error stream (cerr):

- PATH_DEBUG_INTERPROCESS_HL – enables trace debugging for summary-level interprocess processing
- PATH_DEBUG_INTERPROCESS – enables trace debugging for all levels of interprocess processing (automatically activates PATH_DEBUG_INTERPROCESS_HL)
- PATH_DEBUG_INTERPROCESS_MSG – enables trace debugging for all interprocess message sending and receipt
- PATH_DEBUG_TASK_TRACE – enables trace debugging for intertask processing

4.8 Realized High-Speed Communications

In general each outbound socket message is created by taking an outbound incident and serializing it into a PfdSockMessage, as described in section 2.2.3 Multi Process. In some specific cases a message is sent repeatedly with very little or no change to its contents between transmissions. If this happens frequently, or with very large incident parameter ART payloads, the designer may discover the complete re-serialization of the incident is too slow to meet message processing deadlines.

In these cases it can be possible to avoid this repeated serialization overhead by writing realized code that constructs a PfdSockMessage and sends it directly with PfdTopology::enqueueOutboundRawMessage(). This realized code can create an instance of a PfdSockMessageOut, send it, poll for send completion, then modify specific data values within it, and resend this same instance, all with relatively high efficiency. This process sidesteps the safer and generally more convenient Incident-based approach, but also avoids that approach's buffer allocation and full serialization processing.

Steps:

- Create a raw socket message PfdSockMessageOut instance, usually by manually serializing an Incident in realized code
- Call deferDeletion() on the raw message to prevent it from being deleted after each send
- Save off a pointer to this raw message for repeated use
- Enqueue the raw message by calling PfdProcess::sendRawMessageInterProcess()
- Before updating specific data values in the raw message for a subsequent send, call isComplete() on the message and ensure it returns TRUE
- Update specific data values in the raw message for the next send
- Send again with PfdProcess::sendRawMessageInterProcess()
- Don't forget to put your toys away when play time is over: delete raw_message;
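A sketch of this sequence follows – build_raw_message and update_payload_fields are hypothetical helpers, and note this section names both PfdTopology::enqueueOutboundRawMessage() and PfdProcess::sendRawMessageInterProcess() for the send; the sketch uses the latter, per the steps above:

    // Sketch only: repeated raw-message sends without full re-serialization.
    void send_high_rate(PfdIncident* incident, int send_count)
    {
        // Manually serialize the incident once into a raw socket message
        // (build_raw_message is a hypothetical helper).
        PfdSockMessageOut* raw_message = build_raw_message(incident);
        raw_message->deferDeletion();    // prevent deletion after each send

        for (int i = 0; i < send_count; ++i)
        {
            // Enqueue the same instance for sending.
            PfdProcess::sendRawMessageInterProcess(raw_message);

            // Poll for send completion before touching the buffer again.
            while (!raw_message->isComplete())
                pfd_sleep(0);

            // Update specific data values in place for the next send
            // (hypothetical helper).
            update_payload_fields(raw_message);
        }

        delete raw_message;   // put your toys away when play time is over
    }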
A. Thread Proliferation in Hand Written Systems

Since the dawn of 3GLs (FORTRAN, COBOL, C, Pascal, Ada, C++, Java, etc.), software for complex, high-performance systems has been expressed nearly universally as long synchronous sequences of subroutine/procedure/function/method invocations. This serialization of processing is more a product of the single-dimensionality of these programming languages than of any inherent simplicity in the problem spaces being addressed.

The advent of multitasking and multiprocessor systems has driven 3GL programmers to break single-dimensional synchronous processing into separate sequences – threads. This decomposition has met varying levels of success, and has introduced specific system partitioning techniques trying to take advantage of the "apparent" parallelism offered by OS-level tasks. The subsequent rise in availability of multi-core systems (and their "actual" parallelism) has further spurred this move to multi-threading.

As development organizations gain proficiency with OS-level multi-threading, and with the absence of any other way to achieve parallelism (the fundamentally synchronous nature of programming languages hasn't changed), typical complex system architectures have experienced a sharp rise in the number of OS-level threads employed. With this rise come the computational costs of these complex OS mechanisms, and the development and maintenance costs of more complex designs. In many cases an overabundance of threads has resulted in systems with lower performance, even with more capable processors and memory. PI-MDD offers an alternative to some of this harmful thread proliferation.

PI-MDD, Asynchronous Domain Interactions, and The Event-Driven Paradigm

The use of PI-MDD models breaks some of the constraints of synchronous sequences of subroutine/procedure/function/method invocations. Domain Modeling pushes system design from the very beginning to a base of primarily asynchronous interactions. With direct support for callbacks in the UML Action Language via the IncidentHandle, and with state machines available within domains, there is a wide selection of asynchronous elements to augment the basic synchronous function/method call. With PathMATE automation, the model-level function call (to a domain service or class operation) can also be generated with an implementation that uses an asynchronous IncidentHandle as needed. These asynchronous primitives break the simplistic lock that synchronous programming languages put on basic behavioral expression.

The result of these asynchronous forms of expression is a PI-MDD system that has a large number of simple entities operating in asynchronous independence from each other, even when deployed to the same OS thread of control. OS threads are no longer needed to work around blocking calls or to interweave long chains of operations.

Augmenting this fundamentally asynchronous form of behavioral expression, PI-MDD components can also be deployed to separate OS threads as needed – when blocking (or "nearly blocking", long-latency) calls to realized code functions must be made. Typically these are found with communication sends and receives, file operations, or calls into legacy code still organized in long, synchronous call chains. Additional tasks
are also applied to manage priority, so a few higher-priority elements are allocated to a high-priority task separate from the main task with ordinary priority.

The net result is the allocation of much larger fragments of processing to fewer OS threads, while still realizing greater interleaving of processing through fundamentally asynchronous behavioral expression. The need for inter-task synchronization mechanisms is greatly reduced, and therefore run-time overhead and general complexity are reduced.
B. Marking Summary

    marking name        applies to                default value             description
    TaskID              Domain, Service,          SYS_TASK_ID_MAIN          Allocate action processing to a Task
                        Class, Operation
    ProcessType         Domain, Service           <DefaultProcessType>      Allocate action processing to a Process Type
    Routing             Parameter                 <none>                    Indicate this DestinationHandle specifies the
                                                                            destination
    ThreadSafe          Class, Association        F                         "T" generates a PfdSyncList for the instance
                                                                            container
    DefaultStackSize    System                    0                         Specifies stack size for all Tasks; 0 indicates
                                                                            use OS default
    DefaultProcessType  System                    MAIN                      Allows the default ProcessType name to be changed
    TaskPriority        Domain, Service,          SYS_TASK_PRIORITY_NORMAL  Sets the priority of the task for this analysis
                        Class, Operation                                    element; must be a pfdos_priority_e literal
    Defines             System, Domain            <none>                    Allows specification of compiler symbol values
                                                                            in the markings
    ExternalType        User Defined Type         void*                     Implementation type for an ART
    IncludeFile         User Defined Type         <none>                    Include file for the implementation type for an ART
    Serializable        User Defined Type         FALSE                     TRUE indicates this is an ART, inheriting from
                                                                            PfdSerializable
C. Compiler Symbol Summary

    symbol name                                   default value             description
    PFD_MAIN_OOA_THREAD_PRIORITY                  SYS_TASK_PRIORITY_NORMAL  Default task priority for each OOA processing task
    PFD_RECEIVER_THREAD_PRIORITY                  SYS_TASK_PRIORITY_NORMAL  Default task priority for the TCP receiver task
    PFD_TRANSMIT_THREAD_PRIORITY                  SYS_TASK_PRIORITY_NORMAL  Default task priority for each TCP sender task
    PFD_IE_THREAD_PRIORITY                        SYS_TASK_PRIORITY_LOWEST  Default task priority for the Spotlight connection task
    PFD_INPUT_THREAD_PRIORITY                     SYS_TASK_PRIORITY_NORMAL  Default task priority for the Driver input task
    PATH_ALLOCATE_EXPLICIT_STACK                  <not defined>             Define this to cause a stack to be allocated explicitly,
                                                                            separate from the start task OS call
    PATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS      1000                      Time between sender task attempts to reconnect with
                                                                            the destination process, in milliseconds
    PATH_IMMEDIATE_RECONNECT_MAX_COUNT            5                         Max number of times the sender task attempts to
                                                                            reconnect with the destination process without waiting
                                                                            between attempts
    PATH_DELAY_BETWEEN_MESSAGE_PACKET_SENDS_MS    0                         Time between interprocess message packet sends (socket
                                                                            send calls), in milliseconds; 0 means pfd_sleep(0);
                                                                            -1 means no sleep at all
    PATH_ENTRY_DISCONNECTED_TX_QUEUE_LIMIT        128                       Max number of pending outbound messages queued for a
                                                                            single destination while not connected to that destination
    PATH_ENTRY_CONNECTED_TX_QUEUE_LIMIT           1024                      Max number of pending outbound messages queued for a
                                                                            single destination while connected to that destination
    PATH_DEBUG_INTERPROCESS_HL                    <not defined>             Define this to turn on high level "printf" debugging of
                                                                            interprocess mechanisms
    PATH_DEBUG_INTERPROCESS                       <not defined>             Define this to turn on full "printf" debugging of
                                                                            interprocess mechanisms
    PATH_DEBUG_INTERPROCESS_MSG                   <not defined>             Define this to turn on "printf" debugging of interprocess
                                                                            message sending
    PATH_DEBUG_TASK_TRACE                         <not defined>             Define this to turn on "printf" debugging of PfdTask
                                                                            mechanics
