Multiple Kernel Learning based Approach to Representation and Feature Selecti... (ICAC09)
Multiple kernel learning can be used for image classification with support vector machines. It allows representation and feature selection in a higher dimensional kernel feature space without explicitly transforming the data. The approach involves learning an optimal combination of multiple kernel matrices computed from different representations of the image data. This leads to more discriminative classification models compared to using a single kernel.
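A minimal sketch of the idea in Python with scikit-learn (not the paper's implementation): several kernel matrices from different representations are combined with weights and fed to an SVM as a precomputed kernel. A real MKL method would learn the weights; the fixed values and kernel choices here are illustrative.

```python
# Combine two kernel matrices with fixed weights and train an SVM on the result.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Two "representations" -> two kernel matrices
K1 = rbf_kernel(X, X, gamma=0.5)
K2 = linear_kernel(X, X)

# An actual MKL solver would optimize these weights jointly with the SVM.
w = [0.6, 0.4]
K = w[0] * K1 + w[1] * K2

clf = SVC(kernel="precomputed").fit(K, y)
print(round(clf.score(K, y), 2))
```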
TUKE MediaEval 2012: Spoken Web Search using DTW and Unsupervised SVM (MediaEval2012)
This document describes a spoken web search system that combines dynamic time warping (DTW) with an unsupervised support vector machine (SVM). It is organised in three sections:
1) System architecture - outlines the segmentation, feature extraction, SVM method, and searching algorithm components of the system.
2) Experimental results - reports results from testing the system, though few details are given.
3) Conclusion - concluding remarks for the system; no specifics are given.
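The DTW component at the core of such systems can be sketched as follows (a minimal distance on 1-D sequences; a real spoken-term search compares sequences of acoustic feature vectors, but the recurrence is the same):

```python
# Classic DTW: D[i, j] = cost(i, j) + min of the three predecessor cells.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # same shape, stretched: cost 0.0
print(dtw_distance([1, 2, 3], [5, 5, 5]))
```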
Presentation at International Advanced School on Knowledge Co-creation and Service Innovation 2012, Japan Advanced Institute of Science and Technology, March 1
This document discusses various feature detectors used in computer vision. It begins by describing classic detectors such as the Harris detector and Hessian detector that search scale space to find distinguished locations. It then discusses detecting features at multiple scales using the Laplacian of Gaussian and determinant of Hessian. The document also covers affine covariant detectors such as maximally stable extremal regions and affine shape adaptation. It discusses approaches for speeding up detection using approximations like those in SURF and learning to emulate detectors. Finally, it outlines new developments in feature detection.
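As a concrete example of the classic detectors mentioned, here is a minimal Harris-style corner response in Python. The window (a 3x3 box filter standing in for the usual Gaussian) and the constant k are illustrative choices.

```python
# Harris corner response: R = det(M) - k * trace(M)^2, with M the windowed
# structure tensor of the image gradients.
import numpy as np

def box3(a):
    # 3x3 box filter via zero padding (stands in for the usual Gaussian window)
    p = np.pad(a, 1)
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3))

img = np.zeros((8, 8))
img[4:, 4:] = 1.0  # bright quadrant: one corner, two edges

gy, gx = np.gradient(img)
Sxx = box3(gx * gx)
Syy = box3(gy * gy)
Sxy = box3(gx * gy)

k = 0.04
R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
y, x = np.unravel_index(np.argmax(R), R.shape)
print(int(y), int(x))  # strongest response lands on the corner of the quadrant
```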
iPaper@GlobIS - Interactive Paper Research (Beat Signer)
The document discusses iPaper@GlobIS, an interactive paper research project. It outlines some problems with existing applications, such as a focus on hardware over data integration. The project aims to create a general interactive framework linking paper and digital resources through a cross-media information management platform. This will allow for different forms of paper interactions, rapid prototyping, and integration of new input/output devices.
This document discusses various topics related to computer graphics and video processing, including coordinate systems, line drawing algorithms, circle generation, polygon filling, and 3D modeling. It provides explanations of techniques like Bresenham's line algorithm, the midpoint circle algorithm, scanline polygon filling, and boundary representation for 3D objects. Examples are given for 2D and 3D coordinate systems, parallel line drawing, concave polygon splitting, inside-outside testing, and representing adjacent polygon surfaces.
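Bresenham's line algorithm, one of the techniques explained, can be sketched as follows (the generalized integer-error form that handles all octants):

```python
# Bresenham's line algorithm: integer-only error accumulation decides when to
# step in y (or x) while walking from (x0, y0) to (x1, y1).
def bresenham(x0, y0, x1, y1):
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:   # error large enough: step in x
            err += dy
            x0 += sx
        if e2 <= dx:   # error large enough: step in y
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 4, 2))
```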
A Framework for Cross-media Information Management (Beat Signer)
Presentation given at EuroIMSA 2005, International Conference on Internet and Multimedia Systems and Applications. Grindelwald, Switzerland, February 2005
ABSTRACT: Nowadays a user's personal information space is fragmented into multiple repositories on their local machine as well as on remote servers. In order to enable later access to resources managed within such a cross-media information space, information has to be organised in a format that can be processed by automatic retrieval processes. We propose a general framework for personal information management based on extending a cross-media link server with supplemental metadata functionality. In addition to user generated information, our solution automatically derives metadata for classifying and associating resources based on direct interaction with the information space. Resources and metadata can be integrated by referencing external resources or information may be managed directly by the framework. The presented cross-media information management solution is not limited to a fixed set of predefined resources and can be extended based on a resource plug-in mechanism.
Model-based analysis using generative embedding (khbrodersen)
This document describes using generative models to analyze brain imaging data and distinguish between disease states. It discusses:
1. Developing neural models of pathophysiological processes and validating them with experimental data.
2. Inverting these models on imaging data from individual subjects to obtain subject-specific representations in a generative score space.
3. Using these representations to train classifiers that can distinguish patient groups, with the goal of inferring pathophysiological mechanisms in patients.
4. An example application where this framework is able to accurately diagnose stroke patients based on speech-related brain regions.
Using reduced system models for vibration design and validation.
1) Equivalent models are used to reduce complexity while preserving key behaviors through homogenization and model updating.
2) Model reduction techniques like component mode synthesis represent the system with subspace bases to enable coupling of test and finite element models.
3) Energy coupling methods allow assembly of disjoint reduced component models through computation of interface energies.
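The reduction idea in point 2 can be illustrated with the simplest member of that family, Guyan (static) condensation; component mode synthesis additionally retains component modes. The stiffness matrix and DOF partition below are made-up example values.

```python
# Guyan (static) condensation: eliminate "slave" DOFs s, keep "master" DOFs m.
# With K partitioned into blocks, the reduced stiffness is
#   K_red = K_mm - K_ms K_ss^{-1} K_sm.
import numpy as np

K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])
m = [0]      # master DOFs (kept)
s = [1, 2]   # slave DOFs (condensed out)

Kmm = K[np.ix_(m, m)]
Kms = K[np.ix_(m, s)]
Ksm = K[np.ix_(s, m)]
Kss = K[np.ix_(s, s)]

K_red = Kmm - Kms @ np.linalg.solve(Kss, Ksm)
print(K_red)  # exact for static response at the master DOFs
```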
Efficient Variable Size Template Matching Using Fast Normalized Cross Correla... (Gurbinder Gill)
In this presentation we propose a parallel implementation of full-search template matching, using normalized cross-correlation (NCC) as the similarity measure. Pre-computed sum tables (the fast NCC, or FNCC, technique) make the search efficient for high-resolution images on NVIDIA general-purpose graphics processing units (GPGPUs).
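The sum-table idea behind FNCC can be sketched as follows (illustrative Python): integral images of the image and its square let the window sum and energy terms of the NCC denominator be read off in O(1) per template position after an O(N) precomputation. The numerator (the cross term) is handled separately in the full method.

```python
# Window sums and sums of squares for every template position via integral
# images (sum tables).
import numpy as np

def window_sums(img, h, w):
    # Integral images with a zero row/column prepended.
    S = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    S2 = np.pad(img ** 2, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    # Four-corner lookups give each h-by-w window sum in O(1).
    s = S[h:, w:] - S[:-h, w:] - S[h:, :-w] + S[:-h, :-w]
    s2 = S2[h:, w:] - S2[:-h, w:] - S2[h:, :-w] + S2[:-h, :-w]
    return s, s2

img = np.arange(16.0).reshape(4, 4)
s, s2 = window_sums(img, 2, 2)
print(s[0, 0])  # sum of the top-left 2x2 window: 0 + 1 + 4 + 5
```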
University Course "Micro and nano systems" for Master Degree in Biomedical Engineering at University of Pisa. Topic: Software for additive manufacturing (part1)
Moldex3D, Structural Analysis, and HyperStudy Integrated in HyperWorks Platfo... (Altair)
In recent years, with the increasing variety, complexity, and precision requirements of plastic products, CAE tools have been widely used to solve product design and manufacturing issues. Structural designs and molding process parameters can be optimized efficiently through CAE analyses; combined with reliable experimental verification, guidance on designs and process condition settings can be provided before real moldings. However, finding an optimized parameter set through traditional CAE analyses is not always efficient. A novel integration between Moldex3D and HyperStudy allows quicker and more efficient parameter optimization, which saves time and increases product quality and productivity.
Also, traditional CAE analyses do not consider the influence of molding properties on structural analysis, such as material property variations caused by fiber orientation and residual stresses. Accordingly, an integrated technology is proposed to bridge molding and structural analysis. Through the integration of Moldex3D and structural analysis in the HyperWorks platform, the important effects of the molding process can be transferred to the structural analysis for more accurate and realistic predictions of product behavior. This integration provides a virtual product development platform for users to increase profits as well as enhance productivity.
Using R in Kaggle Competitions.
Kaggle has been the most popular data science platform, linking close to half a million data scientists worldwide. This talk shows how to earn a decent ranking in Kaggle competitions with R programming, eXtreme Gradient Boosting (XGBoost), and a laptop: great machine learning tools for all levels to get started and learn. Find out how to perform feature engineering, tune XGBoost models, select sizable cross-validations, and build model ensembles.
This document discusses topological data analysis (TDA), an approach to analyzing large, complex datasets. TDA applies concepts from topology to classify, visualize, and explore data. It constructs simplified representations of data called simplicial complexes that retain topological properties and are robust to noise. Key techniques in TDA include Vietoris-Rips complexes, persistent homology, and Betti numbers. TDA shows promise for applications like surface reconstruction, anomaly detection, and optimizing machine learning models for big data analysis.
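The simplest such computation, counting connected components (the zeroth Betti number) of the Vietoris-Rips 1-skeleton at a single scale, can be sketched with union-find; persistent homology tracks how this count changes as the scale varies. The point cloud below is made up.

```python
# Betti-0 at scale eps: connect points closer than eps, count components.
import numpy as np

def betti0(points, eps):
    n = len(points)
    parent = list(range(n))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= eps:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

pts = np.array([[0, 0], [0.5, 0], [5, 5], [5.4, 5]])
print(betti0(pts, 1.0))   # two well-separated clusters
print(betti0(pts, 10.0))  # everything merges into one component
```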
This document discusses the design of neural network architectures. It introduces RegNet, a design space that uses a simple linear parameterization to generate "regular" networks. The document analyzes the RegNet design space and compares it to existing networks. Key findings include that optimal depth is around 20 blocks, a bottleneck ratio of 1 is best, and a width multiplier of around 2.5 works well. RegNet models achieve state-of-the-art or competitive results across mobile, standard, and full network regimes.
Realtime Per Face Texture Mapping (PTEX) (basisspace)
This presentation shows the original method for implementing Per-Face Texture Mapping (PTEX) in real-time on commodity hardware. PTEX is used throughout the film industry to handle texture seams robustly while simultaneously easing artist workflow.
Graphics processing unit computer architecture (Haris456)
This document discusses graphics processing units (GPUs) and their evolution from specialized hardware for graphics to general-purpose parallel processors. It covers key aspects of GPU architecture like their massively parallel threading model, SIMD execution, memory hierarchy, and how programming models like CUDA map computations to large numbers of threads grouped into blocks. Examples of GPU hardware like Nvidia's Fermi architecture are also summarized.
The document discusses the Graph500 benchmark for evaluating graph algorithms on parallel systems. The benchmark involves converting an edge list to an internal format, then performing breadth-first searches from random roots. Multiple problem sizes are defined from toy (17GB) to huge (1.1PB). Reference implementations include Octave, OpenMP, Cray XMT, and MPI codes. Untuned performance results on 128-processor Cray XMT show 3.8 trillion traversed edges per second for the toy class. The overall objective is to explore benchmarks for data-intensive, irregular computations on shared-memory platforms.
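The benchmark's kernels can be sketched in miniature (illustrative Python, not one of the reference implementations): build adjacency from an edge list, run BFS from a root, and report a traversed-edges-per-second (TEPS) rate.

```python
# Graph500-style kernel in miniature: edge list -> adjacency -> BFS -> TEPS.
import time
from collections import defaultdict, deque

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3), (4, 5)]
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

def bfs(root):
    parent = {root: root}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

t0 = time.perf_counter()
parent = bfs(0)
elapsed = max(time.perf_counter() - t0, 1e-9)
# Directed edge scans from visited vertices (each undirected edge twice)
traversed = sum(len(adj[u]) for u in parent)
print(len(parent), f"TEPS ~ {traversed / elapsed:.0f}")
```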
John Yates has been a programmer since 1970. In 2010, he was working at Netezza where they needed a new data analytics architecture. He proposed using a family of statically typed acyclic data flow graph intermediate representations (IRs). This involved representing the query plan as a graph of operators with data flowing along edges. The graph could then be optimized and executed in parallel across cluster nodes. Yates provided an initial implementation of this approach to prove the concept, which helped convince Netezza to adopt it. He then led the development of the new Marlin architecture at Netezza based on this IR approach.
Apache SystemML Optimizer and Runtime techniques by Matthias Boehm (Arvind Surve)
This deck describes general framework techniques for Large Scale Machine Learning systems. It explains Apache SystemML specific Optimizer and Runtime techniques. It will describe data structures, DAG compilation, operator selection including fused operators, dynamic recompilation, inter procedure analysis and some ongoing research projects.
This document provides an overview of MATLAB for students in engineering fields. It introduces MATLAB as a tool for matrix calculations and numerical computing. It describes the MATLAB environment and commands for help, variables, matrices, logical operations, flow control, scripts and functions. It also covers image processing in MATLAB, including importing and displaying images, image data types, basic operations, and examples of blending and edge detection on images. Finally, it discusses performance issues and the importance of vectorizing code to avoid slow loops.
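The vectorization advice carries over directly to other array languages; here is the same loop-versus-array contrast in NumPy for brevity (the MATLAB version is analogous):

```python
# Replace an explicit element-wise loop with a whole-array expression.
import numpy as np

x = np.linspace(0.0, 1.0, 1_000)

# Loop version (slow in MATLAB and Python alike)
y_loop = np.empty_like(x)
for i in range(x.size):
    y_loop[i] = x[i] ** 2 + 1.0

# Vectorized version: one array expression, no interpreter-level loop
y_vec = x ** 2 + 1.0

print(np.allclose(y_loop, y_vec))
```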
Hadoop Summit 2012 | Bayesian Counters AKA In Memory Data Mining for Large Da... (Cloudera, Inc.)
Processing of large data requires new approaches to data mining: low (close to linear) complexity and stream processing. While in traditional data mining the practitioner is usually presented with a static dataset, possibly with a timestamp attached, from which to infer a model for predicting future/held-out observations, in stream processing the problem is often posed as extracting as much information as possible from the current data to convert it into an actionable model within a limited time window. In this talk I present an approach based on HBase counters for mining over streams of data, which allows for massively distributed processing and data mining. I will consider overall design goals as well as HBase schema design dilemmas to speed up the knowledge extraction process. I will also demo efficient implementations of Naive Bayes, Nearest Neighbor, and Bayesian Learning on top of Bayesian Counters.
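The counter-based Naive Bayes idea can be sketched as follows (plain Python dictionaries stand in for the distributed HBase counters of the talk; class and feature names are made up):

```python
# Streaming Naive Bayes whose entire state is increment-only counters.
from collections import defaultdict
from math import log

class CounterNB:
    def __init__(self):
        self.class_counts = defaultdict(int)
        self.feat_counts = defaultdict(int)  # (cls, feature) -> count

    def observe(self, features, cls):
        # Pure counter increments: trivially distributable / streamable.
        self.class_counts[cls] += 1
        for f in features:
            self.feat_counts[(cls, f)] += 1

    def predict(self, features):
        def score(cls):
            n = self.class_counts[cls]
            s = log(n)
            for f in features:
                s += log((self.feat_counts[(cls, f)] + 1) / (n + 2))  # Laplace
            return s
        return max(self.class_counts, key=score)

nb = CounterNB()
nb.observe({"spam_word"}, "spam")
nb.observe({"spam_word"}, "spam")
nb.observe({"ham_word"}, "ham")
print(nb.predict({"spam_word"}))  # -> spam
```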
The document describes EasyEDD, a software for analyzing tomographic energy-dispersive diffraction imaging (TEDDI) data obtained from synchrotron sources. EasyEDD allows batch processing and visualization of large diffraction data sets. It stores data in a 3D grid format and includes tools for corrections, fitting, visualization and exporting results. The software combines a graphical user interface with algorithms for numerical analysis. Current functionality and future improvements are outlined.
The document discusses VLSI (Very Large Scale Integration) and the history and trends in integrated circuit technology. It explains the progression from SSI to VLSI from the 1950s to present day, with increasing numbers of transistors per chip. It also defines different integration levels and provides examples of processor sizes for different silicon process technologies.
The document provides an overview of graphics pipelines. It discusses the basic graphics pipeline which includes 3D scene, vertex fetching, vertex processing, scan conversion, pixel processing, and raster operations. It then discusses modern graphics pipelines which use programmable shaders and unified shader architectures. It also discusses moving beyond traditional pipelining through parallelism approaches like SIMD, SIMT, and MIMD. Future trends may involve more MIMD approaches and programming models similar to SPU programming. This could enable more complex data structures, algorithms, lighting approaches, and rasterization techniques.
Similar to Structural Dynamics Toolbox and OpenFEM, a technical overview
2. Outline
• Overview of scientific activities
– Model reduction
– Damping
– Design for systems with NL vibration
• SDT, OpenFEM, FEMLink: the software developed for the purpose
4. General objectives
• OpenFEM (LGPL, SDTools/INRIA/individual contributors)
– General purpose multi-physic 3D Finite Element Modeling
– Objective: all linear FEM models, plus some non-standard (orientation maps) and non-linear (geometric stiffness) ones
– A toolbox: allows experimentation in FEM development
– Commercial objective for SDTools: performance competitive with major software (NASTRAN, Abaqus, ANSYS, …) on target applications
• Structural Dynamics Toolbox (commercial, SDTools)
– Optimized OpenFEM performance, Graphical User Interfaces
– Model reduction/system dynamics (1e5 to 20e6 DOF)
– Experimental modal analysis, Test / Analysis correlation
• FEMlink: import/export modules for industrial codes: NASTRAN, ABAQUS, ANSYS, SAMCEF, …
• SDT Runtime: deployment of custom and standalone applications
• Visco and Rotor: specialized extensions
[Diagram: CAD → Meshing → FEM → Simulation → Tests, alongside the supporting software: SDT Runtime; SDT, Visco, Rotor; OpenFEM, FEMlink; MATLAB]
5. FEM : OpenFEM / SDT architecture
Preprocessing
• Mesh manipulations
• Structured meshing
• Property/boundary condition setting
• Renumbering
• GUI based editing
• Import: GMSH, NetGen, TetGen, GID, Castem, …; NASTRAN, IDEAS, ANSYS, PERMAS, SAMCEF, ABAQUS, MISS, GEFDYN
FEM core
• Shape function utilities, matrix and load assembly
• A library of formulations …
• Factored matrix object (dynamic selection of sparse library, additional solvers)
• Linear static and time response (linear and non-linear)
• Real eigenvalues
• Optimized solvers for large problems, superelements, system dynamics, model reduction and optimization, vibroacoustics, active control, …
Postprocessing
• Stress computations
• Signal processing
• 3D visualization (major extension, optimized, object based)
• Export: VTK, MEDIT; NASTRAN, IDEAS, Abaqus, ANSYS, SAMCEF; Ensight, MISS3D, Gefdyn
• Drive other software (NASTRAN, MISS)
OpenFEM, SDT, MSSMat
6. Meshing
Meshing is a serious business that needs to be integrated in a CAD environment; OpenFEM is a computing environment.
• Import (GMSH, GID, CASTEM, TETGEN, NASTRAN, ANSYS, SAMCEF, PERMAS, IDEAS)
• Structured meshing, script & GUI based
• Run meshing software: GMSH/TetGen driver
• 2D quad meshing, 2D Delaunay
OpenFEM, SDT/FEMLink
7. Formulation library
The OpenFEM specification is designed for multiphysics applications (99 DOF/node, 999 internal DOF/element).
Generic compiled elements
• Support any linear element without recompilation
• 3D elasticity with full anisotropy, 2D plane stress/strain
• 2D & 3D acoustic fluids, fluid/structure coupling
• Heat equation
• Piezo-electric volumes, poro-elasticity
• Gyroscopic effects
Other compiled families
• Geometrically non-linear mechanics, mechanical or thermal pre-stress
• Hyper-elasticity, follower pressure
Other elements (SDT)
• Shells, laminated plate theory, piezoelectric shells
• 3D lines/points: bar, beam, pre-stressed beam, spring, bush, mass
8. OpenFEM : design criteria
• Be a toolbox (easy to develop, debug,
optimization only should take time)
• Optimize ability to be extended by
users
• Performance identical to good fully
compiled code
• Solve very general multi-physics FE
problems
• Be suitable for application deployment
9. Easy user extensions
• Object oriented concepts (specify data structures and methods)
• But non-typed data structures (avoids the need to declare inheritance properties)
Example user element
• Element name is the name of the .m file (beam1.m)
• Must provide basic methods (node, DOF, face, parent, …)
• Provides its own calling format, e.g. beam1('call') returns:
[k1,m1] = beam1(nodeE, elt(cEGI(jElt),:), pointers(:,jElt), integ, constit, elmap, node);
Other self extensions
• Material functions
• Property functions (problem formulations)
• Non-standard topology definitions
11. User elements
elseif comstr(Cam,'node');     out = [1 2];
elseif comstr(Cam,'prop');     out = [3 4 5];
elseif comstr(Cam,'dof');      out = [1.01 1.02 1.03 2.01 2.02 2.03]';
elseif comstr(Cam,'line');     out = [1 2];
elseif comstr(Cam,'face');     out = [];
elseif comstr(Cam,'sci_face'); out = [1 2 2];
elseif comstr(Cam,'edge');     out = [1 2];
elseif comstr(Cam,'patch');    out = [1 2];
elseif comstr(Cam,'parent');   out = 'beam1';

[k1,m1] = beam1(nodeE, elt(cEGI(jElt),:), pointers(:,jElt), integ, constit, elmap, node);

[ID,pl,il] = deal(varargin{:});
pe = pl(find(pl(:,1)==ID(1)), 3:end);   % material properties
ie = il(find(il(:,1)==ID(2)), 3:end);   % element properties
% E*A nu eta rho*A A lump
constit = [pe(1)*ie(4) 0 pe(3)*ie(4) ie(4) ie(7)];
integ = ID;   % MatId ProId
ElMap = [];
out = constit(:); out1 = integ(:); out2 = ElMap;
12. Generic compiled elements
Objective: ease the implementation of
• linear element families
• arbitrary multi-physics
• good compiled speed
• provisions for non-linear extensions
Assumptions
• Strain ε = [B]{q} is a linear function of N and ∇N
• Element matrix is a quadratic function of strain
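Under these assumptions the element matrix takes the standard congruence form. As a sketch (with D the constitutive matrix, Ω_e the element domain, ξ_g and w_g the Gauss points and weights, and J the isoparametric Jacobian — notation assumed, not from the slides):

```latex
\varepsilon = [B]\{q\}, \qquad
K_e = \int_{\Omega_e} B^\top D\, B \,\mathrm{d}\Omega
    \;\approx\; \sum_g w_g\, B(\xi_g)^\top D\, B(\xi_g)\,\det J(\xi_g)
```

Because ε is linear in q and the energy is quadratic in ε, K_e is fully determined by D and the B matrices, which is what allows a single compiled integration loop to serve all linear element families.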
13. Generic compiled elements
During assembly initialization, one defines
• D ↔ constit
• ε ↔ EltConst.NDN
• Ke ↔ D, ε (EltConst.MatrixIntegrationRule, built in integrules MatrixRule)

EltConst = p_solid('constsolid','hexa8',[],[])
p_solid('constfluid','hexa8',[],[])
14. FEM current developments
• Support shape function tricks (MITC, average strain, …)
• Extend FieldAtNode & gstate interpolation
• Orientation maps (3D, contact)
• Better support of analytic expressions
• Parallel residual
• Out of core improvements
• GUI performance
• Poroelastic
Waiting for real need
• Improve ease of development of NL constitutive laws
• Extend parallel operations beyond assembly
[Figure: INRIA MACS — simplified modeling of cardiac contraction with OpenFEM]
15. Substructuring & complex systems
Funding: Bosch (squeal), SNECMA (full rotor), PSA (damping design, time domain + gyro), SNCF (train/track), VALEO (joint non-linearities for lighting systems), …
External applications with SDT base
[Figure: Bosch / U. Stuttgart — flexible piping]
16. SDT & Superelements
A superelement is
• a group of elements identified by a name
• stored as a sub-model
• reduced: {q} = [T]{qr}
• reused (sectors, slices, …)
• parameterized
SDT provided utilities
• Select in model (and dispatch constraints)
• Display full and partial
• Partial recovery for reduced models, support of reduced sensors
• Reduction (Craig–Bampton, free modes, many variants, possibly parameterized)
• Node/property renumbering
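The reduction {q} = [T]{qr} amounts to a congruence projection of the full stiffness and mass matrices onto the basis T. A minimal language-agnostic sketch (in Python/NumPy rather than SDT's MATLAB interface; `reduce_model` is a hypothetical helper name, and modal truncation stands in for the Craig–Bampton variants):

```python
import numpy as np
from scipy.linalg import eigh

def reduce_model(K, M, T):
    """Project full-order stiffness/mass onto the reduction basis T ({q}=[T]{qr})."""
    Kr = T.T @ K @ T
    Mr = T.T @ M @ T
    return Kr, Mr

# Example: 5-DOF spring-mass chain, reduced on its 2 lowest free modes
n = 5
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # tridiagonal stiffness
M = np.eye(n)
w2, phi = eigh(K, M)                # full eigensolution, ascending eigenvalues
T = phi[:, :2]                      # truncated modal basis (mass-normalized)
Kr, Mr = reduce_model(K, M, T)      # 2x2 reduced matrices
w2r, _ = eigh(Kr, Mr)               # reduced eigenvalues reproduce w2[:2]
```

With a mass-normalized modal basis the reduced mass matrix is the identity and the kept eigenvalues are preserved exactly; richer bases (constraint modes, attachment modes) trade size for accuracy of the interface behavior.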
17. Model reduction
Substructuring and complex structures
Established
• Multi-level substructuring, Parametric analysis
Current trends
• Component design within system
• Time domain NL : contact/friction, viscoelastic, …
• Using NL predictions for design
• Thermo-elastic coupling, self-heating
• Stabilize procedures for models >> 1e6 DOFs
Vibroacoustics
Established techniques
• Acoustic FEM modeling
• Coupling matrices
• Coupled predictions
• Reduction methodology (critical for heavy fluids)
Current trends
• Lining materials (poroelasticity)
• Design & robustness
• Industrial process automation
18. Damping procedures
• Meshing
• Material handling
• Response predictions (huge
challenge, but well established)
• Analysis tools
• Vibroacoustic optimization
19. Damping issues
Established techniques undergoing refinement
• Damping device (re)-meshing
• Reduction methods for direct FRF & modal
Open issues
• Device placement & shape optimization techniques
• Design methodologies (initial evaluation & validation)
• Viscoelastic behavior & non linearities (time domain simulation,
brake lining, engine supports, suspensions, …).
• Thermally challenging environments & self heating,
thermoelasticity
• Rethinking the use of piezoelectric shunts to damp vibrations above 150 °C
• Improve material knowledge
• Help building convincing prototypes
20. Identification in SDT
• User-interactive non-linear least squares in the frequency domain: the most efficient method in SDT
• Narrow/wide band iterations
• Some subspace & polynomial methods available but unused (less efficient)
• Needed development: output-only subspace methods
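As an illustration of the frequency-domain non-linear least-squares idea, here is a generic single-mode fit (a hedged sketch, not SDT's actual pole/residue algorithm; `frf` and `residual` are hypothetical helper names, and the synthetic data stands in for a measured FRF):

```python
import numpy as np
from scipy.optimize import least_squares

def frf(w, wn, zeta):
    """Single-mode receptance FRF: H(w) = 1 / (wn^2 - w^2 + 2j*zeta*wn*w)."""
    return 1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)

def residual(p, w, H_meas):
    """Stack real and imaginary parts so least_squares sees real residuals."""
    H = frf(w, *p)
    return np.concatenate([(H - H_meas).real, (H - H_meas).imag])

# Synthetic "measurement": a mode at 103 Hz with 2 % damping
w = 2 * np.pi * np.linspace(90, 115, 200)
H_meas = frf(w, 2 * np.pi * 103, 0.02)

p0 = [2 * np.pi * 100, 0.05]          # rough starting point from a band estimate
sol = least_squares(residual, p0, args=(w, H_meas))
wn_id, zeta_id = sol.x                 # identified pole frequency and damping
```

The narrow/wide band iterations mentioned above correspond to repeating such fits on narrow bands around each pole, then refining all poles together on the wide band.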
21. After ID : topology, expansion, SDM
• Topology correlation/observation (the notion of sensors is very developed in SDT)
• Sensor placement
• Modeshape expansion: estimate DOF values based on measurements & model
• Structural Dynamics Modification
[Figure: observation and expansion — sensors on the mesh vs. FE mode; mode 11 (103 Hz), static and dynamic expansion]
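Modeshape expansion can be sketched as a least-squares fit of reduction-basis coordinates to the sensor measurements (a minimal illustration assuming a modal basis T and an observation matrix C selecting measured DOFs; `expand` is a hypothetical helper, not SDT's implementation):

```python
import numpy as np

def expand(y, C, T):
    """Least-squares modeshape expansion:
    find basis coordinates qr minimizing ||C @ T @ qr - y||_2,
    then recover the full DOF vector q = T @ qr."""
    qr, *_ = np.linalg.lstsq(C @ T, y, rcond=None)
    return T @ qr

# Toy model: 6 DOFs, a 2-vector basis, 3 sensors measuring DOFs 0, 2, 5
T = np.array([[1, 1], [2, 1], [3, 0], [3, -1], [2, -1], [1, -1]], dtype=float)
C = np.zeros((3, 6))
C[0, 0] = C[1, 2] = C[2, 5] = 1.0

q_true = T @ np.array([1.0, -0.5])    # a shape lying in the basis subspace
y = C @ q_true                        # "measured" sensor values
q_exp = expand(y, C, T)               # expanded estimate at all 6 DOFs
```

When the true shape lies in the span of T, the expansion is exact; in practice the basis error and measurement noise are balanced, e.g. by regularizing the fit with the model.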
22. Example : structural dynamics modification
System: identified response
Feedback: modification
Problem: outputs are known, but states (DOF) are needed for coupling
Solution
• Local model
– covers the instrumented area
– includes the modification
• Expansion
– model-based estimation
– gives knowledge of states
[Figure: structure under test, instrumented area, local model, and FEM of the modification]
23. SDM application with damping
[Figure: identified mode shapes — 2-lobe ovalisation, rocking, 3-lobe ovalisation]
PhD thesis: Benjamin Groult
24. Why MATLAB ?
Requirements for a long-term development environment (☺/?/x ratings compare MATLAB, Python, Scilab and Java)
• Ease of development
– Interactive operation on data structures ☺ ☺ ☺ x
– Automated memory management (no declaration) but pre-allocation possible ☺ ☺ x
– Interactive debugger ☺ ?
– Profiler ☺ ?
• 90 % of the code fully portable ⇒
– virtual machine ☺ ☺ ☺ ☺
– performance of interpretation (in-line compiler) ☺ ? x
• Native support of math libraries ☺ ☺ x
• Native support of sparse libraries ☺ ☺ x
• Allow linking a few specific libraries to C, C++, Fortran, Java (compile only the small part that needs it) ☺ ☺ ☺ ☺
SDTools choices
• MATLAB: very stable and well documented (commercial quality); minimum risk for commercial multi-platform support; the best development environment (debug, profile, …)
• Java for GUI development (compatible with MATLAB)
• Python: the only alternative (ABAQUS, Code_Aster), not retained because
– no stable reference installation containing all needed extensions on all platforms
– not an environment for commercial products
– doubts on performance
Problems with our choice of MATLAB
• Memory handling: argument passing without copy is often critical (pre-allocation), but the user needs to pay attention
• MATLAB not perceived as capable of handling large jobs
• NOT FREE: cheap licenses, but many users because it is interactive
• FREE deployment of the MATLAB Runtime
25. What’s involved in SDTools development
• Development forge: support.sdtools.com
– version control / trackers (internal discussions) / support forum
• Test & documentation (80 % of cost)
– Reference examples (example source & non-regression)
– Documentation (650 pages): the only way to gain new users without spending weeks training them
– Reference & doc examples run every night
• Software
– One release per year (7 architectures)
– 90 % MATLAB, 10 % C (critical for performance)
– A wide range of users (600 licenses, 15 countries): necessary to guarantee robustness
26. Thanks to …
AD SANDVIK COROMANT , AEROSPATIALE MATRA AIRBUS, AEROSPATIALE MATRA LANCEURS, AFIT ENY, ALCANTAR & BROWN , Agency for Defence
Development, Alenia Aeronautica, Finmeccanica, Arnold Air Force Base, Aérospatiale Matra Lanceurs, BAE Systems, BBN Technologies, BOFORS SA , BOSCH
SYSTEMES DE FREINAGE S.A.S, Bassin d'Essai des Carènes, Boeing Space and Communications, Bridgestone/Firestone, Bruel & Kjaer, CAMBRIDGE CONTROL Ltd,
CCS Informationssysteme GmbH, CERN, CETIM, CIRA, CNAM, CNR - ITESRE, CNR ITIA, CNRS - GDR Vibroacoustique, CNRS - L.M.A., CNRS LULI, CSIR -
defencetek, Case Western R. University. , Ceanet, Centro Ricerche Fiat, Centro de Investigacicon Cientifica de Yuca..., Chalmers University of Technology, Cybernet
Systems Co., Ltd., Czech TU, D.I.C.E Tokyo Denki University , DALHOUSIE UNIVERSITY, DISAT Università dell'Aquila, DISTART, DLR, DLR e.V., DORNIER
LUFTFAHRT, DORNIER SATELLITENSYSTEME Gmbh, Daewoo Shipbuilding & Marine Engineering Co., Ltd., DaimlerChrysler AG, Deh Yu College of Nursing & Mgmt,
Denel Aviation, Det Norske Veritas, Technical Consulting, Deutsche Bahn AG, EADS LV, ECOLE CENTRALE DE LYON, ECP, EDF, EDF , EDF - POLE INDUSTRIE,
EMBRAER, ENSAM - CER FRANCO - ALLEMAND , ENSIB, ENSIM- UNIVERSITE DU MAINE, ENSTA, Ebara Research Co, Ltd, Ecole Centrale de Lille, Ecole
Centrale de Lyon, Eindhoven University of Technology, Elliptec Resonant Actuator AG, Elliptec co. Siemens, Etablissement Technique de Bourges, F.G. UNIVERSIDAD
POLITECNICA MADRID, FH AALEN, FH Wilhelmshafen, FORSCHUNGSZENTRUM KARLSRUHE , FUNDACION GENERAL DE LA UNIVERSIDAD POLITECNICA,
Fachhochschule Aalen, Fachhochschule Braunschweig / Wolfenb|ttel, Fairchild Dornier GmbH, Ford Motor Company, Fugro Geotechnical Services (HK) Ltd,
Hutchinson SA, IABG gmbh, IFMA, INGEMANSSON TECHNOLOGY , INRETS, INRIA, INSA Lyon, INSA Rouen, INSA de Lyon, INSTITUT PASTEUR, IPSE,
IRCAM, ISVR, Imperial College, Institute for Defense Analyses, Institute of Advanced Computing , Instituto Tecnológico de Aeronáutica-CTA, JPL, JRC ISPRA,
KAIST, KOYO SEIKO CO.,LTD., KUL, KUL - BWK, Kaist University, Kanagawa Institute of Technology, Kanazawa Univ., Kurashiki Kako Co., Ltd., Kwangju Institute of
Science and Technology, Kyoto Univ., Kyoto Univ. , L.C.P.C. NANTES, LASPI, LCPC, LEIBNIZ-RECHENZENTRUM , LMT, LNE, LOCKHEED MARTIN MISSILE & SPACE
CO, LRBA, LTH, Lund University, Laboratory for Experimental Audiology, Lawrence Livermore national Laboratory, Lockheed Martin Aeronautics Co, Lucent
Technologies - Bell Laboratories, MDI, MPI für Mathematik, Magnecomp Corp., Meiji University, School of Science and Tec..., Military University of Technology,
Mitsubishi Electric Corporation, Morgan State University, NASA Ames Research Center, NASA Goddard Space Flight Center, NASA Langley Research Center, NASA
Marshall Space Flight Center, NEC, NSWCCD, Nanjing University of Aeronautics & Astronautics, Nanyang Technological University, National Technical University of
Athens, Naval Air Warfare Center Aircraft Division, Nihon Univ., Northwestern University, ONERA, ONT, OPGRAMAMOWANIE NAUKOWO-TECHNICZNE,
ORBITAL, ORBITAL SCIENCES CORPORATION , Oak Ridge National Laboratory, Ono Sokki Co., Ltd., POLISH ACADEMY OF SCIENCES, PRODERA, PSA Peugeot
Citroën, Pennsylvania State University, Pirelli Informatica Sistemi Informativi Ricerca e Sviluppo, Politecnico di Bari, Progettazione e Produzione, Politecnico di
Torino, Politecnico di bari Vie e Trasporti, Politecnico di torino, Pratt & Whitney, Pratt & Whitney Aircraft, RENAULT VI, RENSSELAER POLYTECHNIC
INSTITUTE, RWTH Aachen, Renault Sport, Rolls-Royce Marine Power, Ruhr- Universität Bochum, SANDIA NATIONAL LABORATORIES, SCHENCK PEGASUS
GMBH, SCHOOL OF ELECTRONIC & ELECTRICAL , SECOND SOURCE ELECTRONIC INC, SEGIME, SEP EC, SERAM - ENSAM, SILESIAN TECHNICAL
UNIVERSITY, SKF E&R Center, SMC Corporation, SNAMProgetti, SNCF, SNECMA, SSPA Sweden AB, STANDARD PRODUCTIONS LIMITED, Saarland University,
Sandia National Lab, Sandia National Laboratories, School of Mechanical, Materials, Manufacturing Engineering and Management, Shneider Electric, Siemens A&D,
Siemens PL, Sony Corp., Southern Illinois University, TECHNICAL UNIVERSITY BRATISLAVA , TECHNICAL UNIVERSITY OF ZIELONA GORA , TNIT, TNO TPD -
TU Delft, TNO-Building and Constructions Research, TRW Lucas Aerospace, TU Berlin, TU Braunschweig, TU Braunschweig, IWF, TU Chemnitz , TU Cottbus, TU
Darmstadt, TU München, Takaoka Electric MFG Co, Ltd, Technical University of Szczecin, Technion, Technische Universitaet Clausthal, Technische Universitaet
Hamburg-Harburg, The MathWorks GmbH, The MathWorks, Inc., Tokyo Electric Power Svcs Co Ltd, Toshiba Corp., UDS Ancona, UDS Catania, UDS Ferrara, UDS
Genova, UDS Lecce, UDS Modena e Reggio Emilia, UDS Padova, UDS Pisa, UDS Roma, UDS Roma, Aerospaziale, UDS Trento, UDS Trieste, UDS Trieste, Energetica,
UNIV. CLAUDE BERNARD LYON 1, UNIVERSIDAD DE OVIEDO, UNIVERSITE DE REIMS , UNIVERSITE DE TOURS, UNIVERSITY OF BIRMINGHAM, UNSW,
UPMC Paris 6, UTRC, Univ. of Shiga Prefecture; The, Universidad Politecnica de Cartagena, Universidad del País Vasco, Universidade de Aveiro, Universitaet Bremen,
Universitaet Stuttgart, Universitaet Wuppertal, Universite De Marne La Vallee, Universite Libre de Bruxelles, University of Bristol, University of Central Florida,
University of Liverpool; The, University of Mississippi, University of Missouri Rolla, University of Pretoria, University of Queensland, University of Sheffield,
University of Sheffield; The, University of Southampton, University of Stellenbosch, University of Thessaly, University of Tokyo; The, University of Wales Swansea,
Università di Roma La Sapienza, Universität GH Siegen, Universität Hannover, Universität Kassel, Universität Weimar, Université de Reims Champagne Ardenne,
Université de Valenciennes, Université du Littoral Côte d'Opale, VALEO SYSTEME D'ESSUYAGE, VATTENFALL Data AB, VTT Manufacturing Technology, Vibration
& Sound Solutions Limited, Virginia Tech, Volvo Trucks of North America, Inc., WICHITA STATE UNIVERSITY, Yamaha Corporation, ZVVZ a.s., enea, takumi
kenthiku kouzou kenkyusyo
27. Why NASTRAN, ANSYS … and SDT ?
Typical reasons to use your FEM and SDT
• Post-processing
⇒ extract response from full model output, generate state-space
models, visualize, …
• Pre-processing
⇒ generate parts of your job, for example in a shape optimization
process
• Serve as base for your in-house developments
(example Bosch time simulation of squeal)
• Integrate pre- and post-processing in an
optimization process
(example PSA topology optimization for
vibroacoustic response)
• Model reduction capabilities of SDT
28. Outline
• Experimental modal analysis
• Test/analysis correlation
• FEM capabilities
• Reduction and superelements