Development of Unstructured Grid Methods for CFD
1. The Development of Unstructured Grid Methods for Computational Aerodynamics
2. Overview
• Structured vs. Unstructured meshing approaches
• Development of an efficient unstructured grid solver
– Discretization
– Multigrid solution
– Parallelization
• Examples of unstructured mesh CFD capabilities
– Large scale high-lift case
– Typical transonic design study
• Areas of current research
– Adaptive mesh refinement
– Moving and overlapping meshes
4. CFD Perspective on Meshing Technology
• Sophisticated Multiblock Structured Grid Techniques for
Complex Geometries
Engine nacelle multiblock grid generated with the commercial software TrueGrid.
5. CFD Perspective on Meshing Technology
• Sophisticated Overlapping Structured Grid Techniques for
Complex Geometries
Overlapping grid system on space shuttle (Slotnick, Kandula and Buning 1994)
7. Characteristics of Both Approaches
• Structured Grids
– Logically rectangular
– Support dimensional splitting algorithms
– Banded matrices
– Blocked or overlapped for complex geometries
• Unstructured grids
– Lists of cell connectivity, graphs (edges, vertices)
– Alternate discretizations/solution strategies
– Sparse Matrices
– Complex Geometries, Adaptive Meshing
– More Efficient Parallelization
8. Discretization
• Governing Equations: Reynolds Averaged Navier-
Stokes Equations
– Conservation of Mass, Momentum and Energy
– Single Equation turbulence model (Spalart-Allmaras)
• Convection-Diffusion-Production
• Vertex-Based Discretization
– 2nd order upwind finite-volume scheme
– 6 variables per grid point
– Flow equations fully coupled (5x5)
– Turbulence equation uncoupled
9. Spatial Discretization
• Mixed Element Meshes
– Tetrahedra, Prisms, Pyramids, Hexahedra
• Control Volume Based on Median Duals
– Fluxes based on edges
– Single edge-based data-structure represents all element types (flux loop sketched below)
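As an illustration of the edge-based data structure mentioned above, the sketch below accumulates fluxes into the residuals of the two vertices sharing each edge, using the dual-face normal stored with the edge. The array layout and function names are assumptions for illustration, not the solver described in the talk.

```python
import numpy as np

def accumulate_residuals(edges, edge_normals, state, flux):
    """Edge-based residual assembly sketch (illustrative, not the talk's code).

    edges        : (n_edges, 2) array of vertex indices (i, j) per edge
    edge_normals : (n_edges, ndim) median-dual face normal associated with each edge
    state        : (n_points, n_vars) conserved variables at the vertices
    flux         : callable(u_i, u_j, normal) -> numerical flux across the dual face
    """
    residual = np.zeros_like(state)
    for e, (i, j) in enumerate(edges):
        f = flux(state[i], state[j], edge_normals[e])
        residual[i] += f   # flux leaves the control volume around vertex i
        residual[j] -= f   # and enters the control volume around vertex j
    return residual
```

Because every element type contributes only edges and dual-face normals, the same loop covers tetrahedra, prisms, pyramids and hexahedra.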
10. Spatially Discretized Equations
• Integrate to Steady-state
• Explicit:
– Simple, Slow: Local procedure
• Implicit
– Large Memory Requirements
• Matrix Free Implicit:
– Most effective with matrix preconditioner
• Multigrid Methods
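A minimal sketch of the explicit option listed above: march the semi-discrete equations in pseudo-time with per-point (local) time steps until the residual norm falls below a tolerance. The names residual, local_dt and the convergence test are illustrative assumptions, not the solver from the talk.

```python
import numpy as np

def explicit_steady_state(u, residual, local_dt, tol=1e-8, max_iter=50000):
    """Explicit pseudo-time marching to steady state (illustrative sketch).

    u        : (n_points, n_vars) initial state
    residual : callable(u) -> discrete residual R(u), same shape as u
    local_dt : callable(u) -> per-point time step (n_points,) from a CFL condition
    """
    for it in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:        # converged to steady state
            break
        u = u - local_dt(u)[:, None] * r   # simple forward-Euler update per point
    return u
```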
11. Multigrid Methods
• High-frequency (local) error rapidly reduced by explicit
methods
• Low-Frequency (global) error converges slowly
• On coarser grid:
– Low-frequency viewed as high frequency
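The two-grid correction below illustrates this idea: smoothing on the fine grid damps the high-frequency error, the remaining smooth error is solved for on the coarse grid (where it appears high-frequency again), and the correction is interpolated back. The operators smooth, restrict and prolong are placeholders; in practice the coarse problem is treated recursively rather than solved directly.

```python
import numpy as np

def two_grid_cycle(u, f, A_fine, A_coarse, smooth, restrict, prolong):
    """One two-grid correction cycle (illustrative sketch, assumed operator names).

    smooth   : callable(A, u, f) -> smoothed iterate (e.g. a few Jacobi sweeps)
    restrict : fine-grid vector -> coarse-grid vector
    prolong  : coarse-grid vector -> fine-grid vector
    """
    u = smooth(A_fine, u, f)                      # pre-smoothing: damp local error
    r = f - A_fine @ u                            # fine-grid residual (smooth error remains)
    e_c = np.linalg.solve(A_coarse, restrict(r))  # coarse-grid error equation
    u = u + prolong(e_c)                          # coarse-grid correction
    u = smooth(A_fine, u, f)                      # post-smoothing
    return u
```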
13. Multigrid for Unstructured Meshes
• Generate fine and coarse meshes
• Interpolate between un-nested meshes
• Finest grid: 804,000 points, 4.5M tetrahedra
• Four level Multigrid sequence
14. Geometric Multigrid
• Order of magnitude increase in convergence
• Convergence rate equivalent to structured grid
schemes
• Independent of grid size: O(N)
15. Agglomeration vs. Geometric Multigrid
• Multigrid methods:
– Time step on coarse grids to accelerate solution on fine
grid
• Geometric multigrid
– Coarse grid levels constructed manually
– Cumbersome in production environment
• Agglomeration Multigrid
– Automate coarse level construction
– Algebraic nature: summing fine grid equations
– Graph based algorithm
16. Agglomeration Multigrid
• Agglomeration Multigrid solvers for unstructured meshes
– Coarse level meshes constructed by agglomerating fine grid
cells/equations
17. Agglomeration Multigrid
• Automated Graph-Based Coarsening Algorithm
• Coarse Levels are Graphs
• Coarse Level Operator by Galerkin Projection
• Grid independent convergence rates (order of magnitude improvement)
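A minimal sketch of one plausible graph-based coarsening pass in the spirit described above: each not-yet-agglomerated seed vertex absorbs its not-yet-agglomerated neighbours to form a coarse cell. The actual coarsening heuristics of the solver are not given in the slides; this is an assumed greedy variant.

```python
def agglomerate(adjacency):
    """Greedy graph agglomeration sketch (one plausible coarsening pass).

    adjacency : dict mapping each fine vertex to its list of neighbours
    returns   : (fine vertex -> coarse cell index, number of coarse cells)
    """
    n = len(adjacency)
    coarse_id = [-1] * n
    n_coarse = 0
    for seed in range(n):
        if coarse_id[seed] != -1:
            continue                      # already absorbed by a neighbour
        coarse_id[seed] = n_coarse        # start a new coarse cell at the seed
        for nb in adjacency[seed]:
            if coarse_id[nb] == -1:
                coarse_id[nb] = n_coarse  # absorb un-agglomerated neighbours
        n_coarse += 1
    return coarse_id, n_coarse
```

Summing the fine-grid equations within each agglomerate then gives the coarse-level operator, i.e. the Galerkin projection with piecewise-constant restriction and prolongation.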
18. Agglomeration MG for Euler Equations
• Convergence rate similar to geometric MG
• Completely automatic
19. Anisotropy Induced Stiffness
• Convergence rates for RANS (viscous) problems much slower than for inviscid flows
– Mainly due to grid stretching
– Thin boundary and wake regions
– Mixed element (prism-tet) grids
• Use directional solver to relieve stiffness
– Line solver in anisotropic regions
20. Directional Solver for Navier-Stokes Problems
• Line Solvers for Anisotropic Problems
– Lines Constructed in Mesh using weighted graph algorithm
– Strong Connections Assigned Large Graph Weight
– (Block) Tridiagonal Line Solver similar to structured grids (sketched below)
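A scalar sketch of the line solve: once the implicit lines have been extracted from the weighted graph, each line yields a tridiagonal system that can be solved directly with the Thomas algorithm. For the coupled flow equations each entry becomes a small block and the divisions below become block solves; the array names are illustrative.

```python
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Tridiagonal (Thomas) solve along one implicit line (scalar sketch).

    lower, diag, upper : sub-, main- and super-diagonal entries of the line system
    rhs                : right-hand side along the line
    """
    n = len(diag)
    d = diag.astype(float)
    b = rhs.astype(float)
    for i in range(1, n):                  # forward elimination
        w = lower[i] / d[i - 1]
        d[i] -= w * upper[i - 1]
        b[i] -= w * b[i - 1]
    x = np.zeros(n)
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = (b[i] - upper[i] * x[i + 1]) / d[i]
    return x
```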
21. Implementation on Parallel Computers
• Intersected edges resolved by ghost vertices
• Generates communication between original and
ghost vertex
– Handled using MPI and/or OpenMP
– Portable, Distributed and Shared Memory Architectures
– Local reordering within partition for cache-locality
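A sketch of the ghost-vertex update using mpi4py, one possible realization of the MPI option above rather than the talk's implementation: each partition posts non-blocking receives for its ghost rows and sends the rows it owns that are ghosted elsewhere. The index maps, array names and float64 layout are assumptions.

```python
from mpi4py import MPI
import numpy as np

def exchange_ghost_values(comm, u, send_indices, recv_indices, neighbours):
    """Update ghost-vertex values from their owning partitions (mpi4py sketch).

    u            : (n_local + n_ghost, n_vars) float64 solution on this partition
    send_indices : dict neighbour_rank -> local indices owned here, ghosted there
    recv_indices : dict neighbour_rank -> local ghost indices owned by that rank
    neighbours   : ranks sharing cut edges with this partition
    """
    requests, buffers = [], {}
    for nb in neighbours:
        buffers[nb] = np.empty((len(recv_indices[nb]), u.shape[1]))
        requests.append(comm.Irecv(buffers[nb], source=nb))
        requests.append(comm.Isend(np.ascontiguousarray(u[send_indices[nb]]), dest=nb))
    MPI.Request.Waitall(requests)
    for nb in neighbours:
        u[recv_indices[nb]] = buffers[nb]   # overwrite ghost rows with owner values
    return u
```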
22. Partitioning
• Graph partitioning must minimize number of cut
edges to minimize communication
• Standard graph based partitioners: Metis, Chaco,
Jostle
– Require only weighted graph description of grid
• Edges, vertices and weights taken as unity
– Ideal for edge data-structure
• Line Solver Inherently sequential
– Partition around line using weighted graphs
23. Partitioning
• Contract graph along implicit lines
• Weight edges and vertices
• Partition contracted graph
• Decontract graph
– Guaranteed lines never broken
– Possible small increase in imbalance/cut edges
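A sketch of the contraction step described above: all grid points belonging to one implicit line are merged into a single supervertex, vertex weights count the merged points and edge weights count the merged fine edges, so a standard partitioner (e.g. Metis) applied to the contracted graph can never split a line. Names and data layout are illustrative.

```python
from collections import defaultdict

def contract_lines(n_vertices, edges, lines):
    """Contract the partition graph along implicit solver lines (sketch).

    n_vertices : number of grid points
    edges      : iterable of (i, j) grid-point pairs
    lines      : list of lists of grid points forming each implicit line
    returns    : supervertex weights and weighted edges of the contracted graph
    """
    super_of = list(range(n_vertices))      # points not on a line stay alone
    for s, line in enumerate(lines):
        for p in line:
            super_of[p] = n_vertices + s    # all points of a line share one supervertex

    remap, vweight = {}, []                 # contiguous supervertex ids and weights
    for p in range(n_vertices):
        s = super_of[p]
        if s not in remap:
            remap[s] = len(vweight)
            vweight.append(0)
        vweight[remap[s]] += 1

    eweight = defaultdict(int)              # edge weight = number of fine edges merged
    for i, j in edges:
        a, b = remap[super_of[i]], remap[super_of[j]]
        if a != b:
            eweight[tuple(sorted((a, b)))] += 1
    return vweight, dict(eweight)
```

Decontracting the resulting partition assigns every point of a line to the partition of its supervertex, which is why lines are guaranteed never to be broken.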
24. Partitioning Example
• 32-way partition of 30,562 point 2D grid
• Unweighted partition: 2.6% edges cut, 2.7% lines cut
• Weighted partition: 3.2% edges cut, 0% lines cut
25. Sample Calculations and Validation
• Subsonic High-Lift Case
– Geometrically Complex
– Large Case: 25 million points, 1450 processors
– Research environment demonstration case
• Transonic Wing Body
– Smaller grid sizes
– Full matrix of Mach and CL conditions
– Typical of production runs in a design environment
26. NASA Langley Energy Efficient Transport
• Complex geometry
– Wing-body, slat, double slotted flaps, cutouts
• Experimental data from Langley 14x22ft wind tunnel
– Mach = 0.2, Reynolds=1.6 million
– Range of incidences: -4 to 24 degrees
27. VGRID Tetrahedral Mesh
• 3.1 million vertices, 18.2 million tets, 115,489 surface pts
• Normal spacing: 1.35E-06 chords, growth factor=1.3
33. Parallel Scalability
• Good overall Multigrid scalability
– Increased communication due to coarse grid levels
– Single grid solution impractical (>100 times slower)
• 1 hour solution time on 1450 PEs
34. AIAA Drag Prediction Workshop (2001)
• Transonic wing-body configuration
• Typical cases required for design study
– Matrix of Mach and CL values
– Grid resolution study
• Follow on with engine effects (2003)
35. Cases Run
• Baseline grid: 1.6 million points
– Full drag polars for Mach = 0.5, 0.6, 0.7, 0.75, 0.76, 0.77, 0.78, 0.8
– Total = 72 cases
• Medium grid: 3 million points
– Full drag polar for each Mach number
– Total = 48 cases
• Fine grid: 13 million points
– Drag polar at Mach=0.75
– Total = 7 cases
36. Sample Solution (1.65M Pts)
• Mach=0.75, CL=0.6, Re=3M
• 2.5 hours on 16 Pentium IV 1.7GHz
37. Drag Polar at Mach = 0.75
• Grid resolution study
• Good comparison with experimental data
43. Adaptive Meshing
• Potential for large savings through optimized mesh resolution
– Well suited for problems with large range of scales
– Possibility of error estimation / control
– Requires tight CAD coupling (surface pts)
• Mechanics of mesh adaptation
• Refinement criteria and error estimation
44. Mechanics of Adaptive Meshing
• Various well-known isotropic mesh methods
– Mesh movement
• Spring analogy (sketched after this list)
• Linear elasticity
– Local Remeshing
– Delaunay point insertion/retriangulation
– Edge-face swapping
– Element subdivision
• Mixed elements (non-simplicial)
• Require anisotropic refinement in transition regions
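A minimal sketch of the spring-analogy movement referenced in the list above: mesh edges act as springs with stiffness inversely proportional to edge length, boundary vertices carry the prescribed displacement, and interior displacements are relaxed with Jacobi iterations. All names and the fixed iteration count are assumptions for illustration.

```python
import numpy as np

def spring_analogy_move(coords, adjacency, boundary_disp, n_iters=200):
    """Spring-analogy mesh movement sketch (Jacobi relaxation of the spring system).

    coords        : (n_points, ndim) current vertex coordinates
    adjacency     : dict vertex -> list of neighbouring vertices (mesh edges as springs)
    boundary_disp : dict boundary vertex -> prescribed displacement vector
    """
    disp = np.zeros_like(coords)
    for v, d in boundary_disp.items():
        disp[v] = d                          # driven by the moving boundary
    for _ in range(n_iters):
        new_disp = disp.copy()
        for v, nbs in adjacency.items():
            if v in boundary_disp:
                continue                     # boundary displacements stay fixed
            # stiffness ~ 1/edge length so short (boundary-layer) edges resist distortion
            w = np.array([1.0 / np.linalg.norm(coords[v] - coords[n]) for n in nbs])
            new_disp[v] = (w[:, None] * disp[list(nbs)]).sum(axis=0) / w.sum()
        disp = new_disp
    return coords + disp
```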
52. Overlapping Unstructured Meshes
• Alternative to Moving Mesh for Large Scale
Relative Geometry Motion
• Multiple Overlapping Meshes treated as single
data-structure
– Dynamic Determination of active/inactive/ghost cells
• Advantages for Parallel Computing
– Obviates dynamic load rebalancing required with mesh
motion techniques
– Intergrid communication must be dynamically
recomputed and rebalanced
• Concept of Rendez-vous grid (Plimpton and Hendrickson)
54. Conclusions
• Unstructured mesh technology is an enabling technology for computational aerodynamics
– Complex geometry handling facilitated
– Efficient steady-state solvers
– Highly effective parallelization
• Accurate solutions possible for on-design conditions
– Mostly attached flow
– Grid resolution always an issue
• Adaptive meshing potential not fully exploited
– Refinement criteria require more research
• Future work to include more physics
– Turbulence, transition, unsteady flows, moving meshes
55. Temporal Discretization of the Transient Term
We now direct our attention to the temporal discretization of the transient term. We perform a "dummy" integration over a temporal control volume extending from $t$ to $t+\Delta t$. Writing the semi-discretized equation as $d\phi/dt = R_\phi(\phi)$, where we have placed $\phi$ as a subscript for clarity, the time level at which the right-hand side is evaluated determines the accuracy of the scheme. Below are some of the options.
First order upwind or backward Euler scheme
In this scheme, the value of $\phi$ is taken at the upwind end of the temporal control volume, i.e. at the new time level $t+\Delta t$:
$$\frac{\phi^{n+1}-\phi^{n}}{\Delta t} = R_\phi\!\left(\phi^{n+1}\right).$$
Using this scheme with a consistent right-hand side of the discretized equation yields an implicit set of equations that requires an iterative solution procedure.
First order downwind or forward Euler scheme
In this scheme, the value of $\phi$ is taken at the downwind end of the temporal control volume, i.e. at the old time level $t$:
$$\frac{\phi^{n+1}-\phi^{n}}{\Delta t} = R_\phi\!\left(\phi^{n}\right).$$
Using this scheme with a consistent right-hand side of the discretized equation yields an explicit set of equations that does not require an iterative solution procedure.
Second order upwind scheme
Using a second order transient expansion over the temporal control volume, we obtain
$$\frac{3\phi^{n+1}-4\phi^{n}+\phi^{n-1}}{2\,\Delta t} = R_\phi\!\left(\phi^{n+1}\right).$$
This yields an implicit system of second order accuracy at the cost of storing one additional time level.
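As a brief illustration of how the three options differ, the sketch below advances the linear model equation $d\phi/dt = -\lambda\phi$ by one step with each scheme. The function and argument names are illustrative; the implicit updates can be written in closed form here only because the model equation is linear, whereas the discretized flow equations require an iterative solution as noted above.

```python
def advance_linear(phi_n, lam, dt, scheme="backward", phi_nm1=None):
    """One time step for the model equation d(phi)/dt = -lam * phi (sketch).

    phi_n   : value at the current time level
    phi_nm1 : value at the previous time level (second order scheme only)
    """
    if scheme == "forward":                 # explicit, first order
        return phi_n * (1.0 - lam * dt)
    if scheme == "backward":                # implicit, first order
        return phi_n / (1.0 + lam * dt)
    if scheme == "second_order_upwind":     # implicit, second order, one extra level stored
        return (4.0 * phi_n - phi_nm1) / (3.0 + 2.0 * lam * dt)
    raise ValueError(scheme)
```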