Model transformations are a key element in any model-driven engineering approach, but writing them is a time-consuming and error-prone activity that requires specific knowledge of the transformation language semantics. We propose to take advantage of advances in Artificial Intelligence and, in particular, Long Short-Term Memory neural networks (LSTM), to automatically infer model transformations from sets of input-output model pairs. Once the transformation mappings have been learned, the LSTM system is able to autonomously transform new input models into their corresponding output models without the need to write any transformation-specific code. We evaluate the correctness and performance of our approach and discuss its advantages and limitations.
An LSTM-Based Neural Network Architecture for Model Transformations
1. An LSTM-Based Neural Network Architecture for Model Transformations
Loli Burgueño¹˒², Jordi Cabot¹, Sébastien Gérard²
1 Open University of Catalonia, Barcelona, Spain
2 CEA LIST, Paris, France
MODELS’19
Munich, September 20th, 2019
3. Artificial Intelligence
• Machine Learning - Supervised Learning: during training, the ML system learns from example input-output pairs; once trained, it transforms new inputs into their corresponding outputs
[Diagram: nested scope of Artificial Intelligence ⊃ Machine Learning ⊃ Artificial Neural Networks ⊃ Deep Artificial Neural Networks]
4. Artificial Neural Networks
• Graph structure: neurons + directed weighted connections
• Neurons are mathematical functions (see the sketch below)
• Connections have associated weights, adjusted during the learning process to increase or decrease the strength of the connection
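Not on the original slide: a minimal sketch of what "neurons are mathematical functions" means in practice, i.e., a weighted sum of inputs passed through an activation. The sigmoid activation and all numbers here are illustrative choices, not taken from the paper.

import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs; the weights are what the learning
    # process adjusts to strengthen or weaken each connection.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Sigmoid activation squashes the result into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# Example: a neuron with two inputs.
print(neuron([0.5, -1.0], weights=[0.8, 0.2], bias=0.1))  # ≈ 0.57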
5. Artificial Neural Networks
• The learning process essentially means finding the right weights
• Supervised learning methods. Training phase:
• Example input-output pairs are used (the dataset)
• The dataset is split into training, validation, and test subsets (a split sketch follows)
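As an illustration of the dataset split above, here is a minimal sketch; the 80/10/10 ratio is an assumption, as the slides do not state the actual proportions.

import random

def split_dataset(pairs, train=0.8, val=0.1, seed=42):
    # Shuffle a copy of the input-output model pairs, then cut it
    # into training, validation, and test subsets.
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n_train = int(len(pairs) * train)
    n_val = int(len(pairs) * val)
    return (pairs[:n_train],                 # used to fit the weights
            pairs[n_train:n_train + n_val],  # used to monitor/tune training
            pairs[n_train + n_val:])         # held out for final evaluation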
6. Artificial Neural Networks
• Combine two LSTMs for better results
• Avoids the fixed-size input and output constraint
• Model transformations (MTs) ≈ the sequence-to-sequence architecture
8. Architecture
• Sequence-to-sequence transformations
• Tree-to-tree transformations
• Input layer to embed the input tree into a numeric vector
• Output layer to obtain the output model from the numeric vectors produced by the decoder
[Diagram: InputModel → InputTree → Embedding layer → Encoder (LSTM network) → Decoder (LSTM network) → Extraction layer → OutputTree → OutputModel]
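A hedged PyTorch sketch of the encoder-decoder pipeline on this slide; the class name, dimensions, and framework are illustrative assumptions, since the slide does not give implementation details or the tree linearization used.

import torch
import torch.nn as nn

class LSTMSeq2Seq(nn.Module):
    def __init__(self, in_vocab, out_vocab, emb_dim=128, hid_dim=256):
        super().__init__()
        # Embedding layer: turns input-tree tokens into numeric vectors.
        self.embed_in = nn.Embedding(in_vocab, emb_dim)
        self.embed_out = nn.Embedding(out_vocab, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        # Extraction layer: decoder vectors -> output-tree token scores.
        self.extract = nn.Linear(hid_dim, out_vocab)

    def forward(self, src, tgt):
        # Encode the linearized input tree; keep only the final state.
        _, state = self.encoder(self.embed_in(src))
        # Decode conditioned on that state (teacher forcing during training).
        out, _ = self.decoder(self.embed_out(tgt), state)
        return self.extract(out)

Training would minimize cross-entropy between these token scores and the expected output tree; at inference time the decoder is run token by token on its own predictions.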
9. Architecture
• Attention mechanism
• To pay more attention to (remember better) specific parts of the input
• It automatically detects which parts are more important
[Diagram: the encoder-decoder pipeline of slide 8 with an Attention layer inserted between the Encoder LSTM network and the Decoder LSTM network]
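A minimal sketch of one common attention variant (Luong-style dot-product attention); the slide does not say which mechanism the authors use, so treat this as illustrative.

import torch
import torch.nn.functional as F

def dot_product_attention(dec_hidden, enc_outputs):
    # dec_hidden: (batch, hidden), the current decoder state.
    # enc_outputs: (batch, src_len, hidden), all encoder states.
    scores = torch.bmm(enc_outputs, dec_hidden.unsqueeze(2)).squeeze(2)
    # Softmax turns the scores into "how much attention" each
    # input position receives.
    weights = F.softmax(scores, dim=1)
    # Context vector: attention-weighted sum of the encoder states.
    context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)
    return context, weights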
10. Model pre- and post-processing
• Pre- and post-processing required to…
• represent models as trees
• reduce the size of the training dataset by using a canonical form
• rename variables to avoid the “dictionary problem” (a renaming sketch follows)
[Diagram: InputModel → Preprocessing → InputModel (preprocessed) → InputTree → Embedding layer → Encoder LSTM network → Attention layer → Decoder LSTM network → Extraction layer → OutputTree → OutputModel (non-postprocessed) → Postprocessing → OutputModel]
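A hypothetical sketch of the variable-renaming step: user-chosen names are replaced by a closed set of canonical names (v0, v1, ...) so the network vocabulary stays fixed, and the mapping is kept so post-processing can restore the originals. Element is an illustrative stand-in for a real model element, not a type from the paper.

from collections import namedtuple

Element = namedtuple("Element", ["kind", "name"])

def canonicalize(elements):
    # Pre-processing: rename user-defined identifiers to v0, v1, ...
    mapping = {}
    renamed = []
    for e in elements:
        if e.name not in mapping:
            mapping[e.name] = f"v{len(mapping)}"
        renamed.append(e._replace(name=mapping[e.name]))
    return renamed, mapping

def restore(elements, mapping):
    # Post-processing: map canonical names back to the originals.
    inverse = {v: k for k, v in mapping.items()}
    return [e._replace(name=inverse.get(e.name, e.name)) for e in elements]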
14. Preliminary results
• Performance
1. How long does it take for the training phase to complete?
2. How long does it take to transform an input model once the network is trained?
15. Limitations/Discussion
• Size of the training dataset
• Diversity in the training set
• Computational limitations of ANNs
• e.g., mathematical operations
• Generalization problem
• predicting output solutions for input models very different from the training distribution it has learned from
• Social acceptance