This document discusses the development of a validation and recommendation engine for simulation workflows. The engine uses ontologies and application models to structure validation knowledge that it applies to simulation data. It can check for logical consistency and perform data transformations. The validation services can be distributed across cloud resources and orchestrated using service meshes for flexibility and scalability. Future work involves spreading validators for continuous data quality and integrating machine learning into the knowledge repository.
Simulation
Features:
• Process sequence
• Analysis types
• Solver parameters
• Geometry
• Constraints
• Boundary conditions
Different types of simulations: fluid flow, structural, thermal, …
Simulations can be very complex and can take a long time (seconds to weeks): a complex environment.
Ref: Autodesk Fusion 360 simulation startup menu
Cloud integration
• New paradigm
• Unlimited storage
• Favors micro-services
• Libraries of interconnected capabilities
• Scalable to support highly distributed computations
• High availability with redundancies & accelerations
• Capacity to set a high level of security
Data transfers
Requirements:
• Secure communication
• Secure message content
• Assess data syntax (schema)
• Validate data
• Consolidate data
• Propagate data
[Diagram: data travels over the Internet to a Cloud server, then on to file storage and database and to the compute workflow]
Our solution
[Diagram: a Validator on the Cloud server sits between the Internet and the file storage, database and compute workflow]
Knowledge
Creation, storage and use
[Diagram: a subject-matter expert initializes and updates a Knowledge Repository server, which feeds the Validators on the Cloud servers that users reach over the Internet]
Assist during content creation
With recommendations
[Diagram: the creation process generates JSON, or chunks of JSON, that the Validator checks; the Validator reports and recommends until validation is successful]
Knowledge structure

Domain ontologies:
• Independent
• Reusable
• Portable
e.g. geometry (mesh, polygon, vertices), materials (mechanical, thermal properties)

Application model:
• Application specific
• Object oriented
• Maps to expected data
e.g. Autodesk Fusion, Moldflow, …

Domain ontology classes are used to compose the application model.
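The split between the two columns can be sketched in code: domain concepts stay generic and reusable, while the application model composes them for one specific tool. All class and field names below are illustrative assumptions, not the actual knowledge-base schema.

```python
from dataclasses import dataclass

# Domain-ontology concepts: independent, reusable, portable.
@dataclass
class Mesh:
    vertices: int
    polygons: int

@dataclass
class Material:
    name: str
    thermal_conductivity: float  # W/(m*K)

# Application model: application specific, composed from domain
# classes and mapped to the data a given tool is expected to send.
@dataclass
class ThermalSimulationModel:
    geometry: Mesh
    material: Material

model = ThermalSimulationModel(
    geometry=Mesh(vertices=800, polygons=1500),
    material=Material(name="aluminium", thermal_conductivity=237.0),
)
print(model.material.name)  # aluminium
```

The domain classes carry no tool-specific assumptions, so the same `Mesh` and `Material` could be reused to compose a model for a different application.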
Types of validation

Descriptive logic (class content):
• Axioms based on ontology content
• Based on entities & relationships
• Populated by the data through instances of entities
• Reasoner to check coherence and consistency

Code logic (procedural programming):
• Simple data transformations (local)
• Advanced data transformations (remote)
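A minimal sketch of the code-logic side, assuming hypothetical field names: a simple local transformation normalizes the data, and a procedural check emits recommendation messages. The descriptive-logic side is handled by a reasoner and is not shown here.

```python
# Code-logic validation: procedural checks plus simple local
# data transformations. Names and thresholds are illustrative only.

def mm_to_m(value_mm: float) -> float:
    """Simple local transformation: normalize units before checking."""
    return value_mm / 1000.0

def check_plate_thickness(data: dict) -> list[str]:
    """Return recommendation messages; an empty list means valid."""
    messages = []
    thickness_m = mm_to_m(data.get("thickness_mm", 0.0))
    if thickness_m <= 0:
        messages.append("Missing or non-positive thickness: provide 'thickness_mm'.")
    elif thickness_m > 0.1:
        messages.append("Thickness above 100 mm: verify the shell assumption.")
    return messages

print(check_plate_thickness({"thickness_mm": 3.0}))  # []
print(check_plate_thickness({}))                     # one recommendation message
```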
Class components
• Attributes: data content required from the data pool for class validation
• Dependencies: prerequisites to determine whether the class should be validated
• Conditions: requirements to validate the class
Consolidates the original data pool with additional content.
Generates messages intended as recommendations.
Note: Another set of recommendations comes from an interpreted form of the reasoner output.
Execution workflow
1. From the pool of classes, identify the classes to validate
2. Check the prerequisites of each class
3. Run class validations: inject data into the model, check descriptive logic, check code logic
4. Consolidate the original data
5. When no more classes remain to validate, report
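The execution workflow can be sketched as a loop over the pool of classes. The validation class and its method names are simplified stand-ins, not the real engine's interface.

```python
# Sketch of the execution loop over the pool of classes.
def run_workflow(classes: list, data: dict) -> list[str]:
    report = []
    pending = [c for c in classes if c.applies_to(data)]  # identify classes to validate
    while pending:                                        # until no class remains
        cls = pending.pop(0)
        if not cls.prerequisites_met(data):               # check prerequisites
            continue
        cls.inject(data)                                  # inject data into model
        report += cls.check_descriptive_logic(data)
        report += cls.check_code_logic(data)
        cls.consolidate(data)                             # consolidate original data
    return report                                         # final report

# Minimal stub class so the loop is runnable end to end.
class StubValidation:
    def applies_to(self, data): return "x" in data
    def prerequisites_met(self, data): return True
    def inject(self, data): data["_model"] = {"x": data["x"]}
    def check_descriptive_logic(self, data):
        return [] if data["_model"]["x"] >= 0 else ["x must be non-negative"]
    def check_code_logic(self, data): return []
    def consolidate(self, data): data["x_squared"] = data["x"] ** 2

report = run_workflow([StubValidation()], {"x": -2})
print(report)  # ['x must be non-negative']
```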
Advanced data transformations
Remote executions:
• Dedicated services for workflow orchestrators such as AWS Step Functions
• Or micro-service orchestration by a 'service mesh': a configurable, low-latency infrastructure layer designed to handle a high volume of network-based inter-process communication among application infrastructure services using application programming interfaces (APIs)
Ref: nginx.com/blog/what-is-a-service-mesh
Flexibility
• Many Cloud providers
• Many service mesh solutions
• Integrated by plugin
Our current solution
[Diagram: the Validator on the Cloud server delegates work to independent compute units through service meshes, alongside the file storage and database]
Validators
Spreading internally to ensure constant data quality
[Diagram: multiple Validators spread across the Cloud server, the independent compute units, the file storage and database, and the service meshes]
Knowledge repository
Current workflow
[Diagram: a subject-matter expert initializes and updates the Knowledge Repository server, which feeds the Validators on the Cloud servers that users reach over the Internet]
Knowledge repository
Envisioned capability
[Diagram: as in the current workflow, but the Knowledge Repository is additionally fed by Big Data and machine learning (ML)]
Summary
• Lightweight service for validation and recommendation, JSON based
• Rich set of validation techniques, with Descriptive Logic and Code Logic
• Well-structured and easy-to-use validation knowledge base, combining Domain ontologies and Application models
• Easily deployable and scalable
The initial problem targeted was optimization: weight minimization while maintaining structural performance.
We started by looking for a global optimum and quickly faced the ugly truth: there is not just one optimum; there can be many.
Finding these optima is a highly distributed process.
Other considerations decide the best design: specific desired strength, space utilization, assembly requirements, look, …
The final design can only be decided by the user, and the decision must be assisted by a practical interface and intelligent interactions between user and system.
What do we care about, and why does it matter?
Deal with the underlying complexity (much interleaved data to handle).
A means to validate the data and their interconnections for the specific scenario.
Aim to prevent the launch of lengthy computations (from seconds to days, even months) when the data is incomplete or incorrect.
A simple way to create the necessary knowledge to validate the data, in a modular form so that it can be reused and recomposed for different applications.
Report any missing or invalid content, with recommendations for quick identification and correction.
Provide justification as to why a validation fails.
Why did we do this work, and how is it different from what exists?
Rule checking is best known from firewalls securing communications.
Traditional rule systems are usually relatively linear (one line per rule), often following a templated style (XML), with fixed validation capabilities and lengthy, hard-to-read rule listings.
We tried to build a complex yet easy-to-use experience for both the users and the subject-matter experts who create and maintain the knowledge required to validate the data.
During provisioning
The validation data is based on a parent-child class system, akin to object-oriented programming, that allows easy derivation of capabilities and versioning and fits well when stored within an ontology.
OWLAPI
The input is expected in JSON format.
This format allows a first syntax validation of the data using the JSON schema.
The schema is also useful for mapping expected data to validation classes and compose validation content.
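That first syntax pass can be sketched with a hand-rolled check of what a JSON Schema's "required" and "type" keywords provide; a real deployment would use a schema-validation library, and the field names in this schema are assumptions.

```python
import json

# Minimal stand-in for JSON Schema "required" / "type" checks.
schema = {
    "required": ["analysis_type", "mesh_size"],
    "types": {"analysis_type": str, "mesh_size": float},
}

def first_pass_syntax_check(raw: str, schema: dict) -> list[str]:
    """Parse raw JSON and report schema-style syntax errors."""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"Invalid JSON: {exc.msg}"]
    errors = [f"Missing key: {k}" for k in schema["required"] if k not in doc]
    errors += [
        f"Wrong type for {k}: expected {t.__name__}"
        for k, t in schema["types"].items()
        if k in doc and not isinstance(doc[k], t)
    ]
    return errors

print(first_pass_syntax_check('{"analysis_type": "thermal", "mesh_size": 0.5}', schema))  # []
print(first_pass_syntax_check('{"mesh_size": "coarse"}', schema))
```

Only data that passes this cheap syntactic gate moves on to the deeper ontology-based and code-logic validations.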
Here are some mocks of the user interface illustrating the mapping process.
The data is injected into the ontology.
The reasoner is fast and provides a quick assessment of consistency.
Built to do it all: applicable to any kind of data.
What was the problem to solve; motivations.
What makes this work different from other initiatives?