An important step in orthopaedic pre-operative planning is the generation of accurate volume meshes from segmented volume images. These meshes are used in patient-specific, biomechanical finite element simulations to optimize the positioning and design of implants. The development of accurate, multi-material volume meshing methods for medical applications is an active, interdisciplinary field of research. Several methods proposed in recent years claim to perform this task accurately, each with its own advantages and disadvantages, and the approaches are diverse. This raises the questions: Which approach is the most suitable? How do we evaluate the quality of such methods? What criteria can be applied to measure the quality of a multi-labelled volume mesh? And which criteria have the most impact on the subsequent simulation, so that stress calculations on the implant are realistic and correct?
These are the basic research questions that are discussed in this work.
This document discusses meshing algorithms and grids used in computational modeling. It defines the basic elements of a mesh as nodes, edges, faces, and volumes. It describes common 2D and 3D mesh element shapes such as triangles, quadrilaterals, tetrahedrons, and hexahedrons. The document also introduces the Ghs3D algorithm for automatically generating tetrahedral volume meshes from boundary surface meshes.
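The node/edge/face/volume decomposition described above can be made concrete with a minimal sketch of our own (not from the document): a volume mesh stored as a node coordinate list plus element connectivity, with a signed-volume check for tetrahedral elements.

```python
# Minimal illustrative mesh representation: node coordinates plus element
# connectivity (indices into the node list), as described in the summary above.

def tet_volume(p0, p1, p2, p3):
    """Signed volume of a tetrahedron via the scalar triple product / 6."""
    a = [p1[i] - p0[i] for i in range(3)]
    b = [p2[i] - p0[i] for i in range(3)]
    c = [p3[i] - p0[i] for i in range(3)]
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
         - a[1] * (b[0] * c[2] - b[2] * c[0])
         + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return det / 6.0

# A single unit tetrahedron: four nodes and one connectivity tuple.
nodes = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tets = [(0, 1, 2, 3)]
volumes = [tet_volume(*(nodes[i] for i in tet)) for tet in tets]
```

A positive signed volume also serves as a basic validity check: an inverted (negatively oriented) tetrahedron would yield a negative value.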
This document discusses the real-time object detection method YOLO (You Only Look Once). YOLO divides an image into grids and predicts bounding boxes and class probabilities for each grid cell. It sees the full image at once rather than using a sliding window approach. This allows it to detect objects in one pass of the neural network, making it very fast compared to other methods. YOLO is also accurate, achieving a high mean average precision. However, it can struggle to precisely localize small objects and objects that appear in dense groups.
Medium & Light - Refraction simulation X 3D Printing X Laser Pen (JosephWu59)
This research builds on 3D printing methodology and brings together the immaterial (light) and the material (transparent polylactic acid). By testing various parameters that confine light inside the medium and control its refraction and path, it proposes a new method for shaping light trajectories and reflections and for creating unique light atmospheres.
This document summarizes a graduation project report on using the NEMO 5 simulation tool to model the energy states and wavefunctions of a gallium arsenide quantum dot nanostructure. Key findings include:
1) NEMO 5 was used to successfully model an 11 nm × 11.5 nm × 5 nm GaAs quantum dot and obtain its energy bands and electron wavefunctions.
2) Computer simulations are important for designing nanostructures before fabrication to reduce costs.
3) Future work includes using NEMO 5 to model more complex structures and exploring other simulation tools through the NanoHub website.
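For a rough sense of the energy scale in a dot of these dimensions, one can compare against a simple particle-in-a-box estimate. This is our own back-of-the-envelope approximation, not NEMO 5's atomistic model; the GaAs effective mass used is a standard literature value.

```python
import math

# Crude infinite-well estimate for an 11 x 11.5 x 5 nm box (illustrative only;
# NEMO 5 uses a far more detailed atomistic description).
HBAR = 1.054571817e-34      # reduced Planck constant, J*s
M0 = 9.1093837015e-31       # electron rest mass, kg
M_EFF = 0.067 * M0          # GaAs conduction-band effective mass (literature)

def well_energy_eV(nx, ny, nz, Lx, Ly, Lz):
    """E = (hbar*pi)^2 / (2m) * (nx^2/Lx^2 + ny^2/Ly^2 + nz^2/Lz^2), in eV."""
    pref = (HBAR * math.pi) ** 2 / (2 * M_EFF)
    E = pref * ((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)
    return E / 1.602176634e-19

E_ground = well_energy_eV(1, 1, 1, 11e-9, 11.5e-9, 5e-9)
```

The estimate lands at a few hundred meV for the ground state, dominated by the short 5 nm confinement direction.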
This document discusses common issues that can occur on a PROFIBUS network and how to troubleshoot them using a PROFIBUS tester. It describes 7 typical cases:
1) Reversal of signal quality levels from opposite ends, indicating high line resistance between stations.
2) Gradual decline in signal quality from one station to the next, caused by signal reflections from missing termination.
3) Some stations appearing "missing" from one end but not the other, indicating a break in one or both signal lines.
4) Only one station having low signal quality, meaning its transmitter voltage is too low.
5) Low idle voltage indicating one or both terminations are not powered correctly.
1. The PROFIBUS Tester 4 is a diagnostic tool that can test PROFIBUS networks in stand-alone mode without a PC or with a connected PC using the PROFIBUS Diagnostic Suite software.
2. In stand-alone mode, the tester can perform a "Live Status" test from both ends of a network to check for errors, quality levels, and issues. Further analysis requires connecting the tester to a PC.
3. When connected to a PC, the Diagnostic Suite software provides an overview of any electrical or communication problems, and allows analyzing the signal quality, protocol, frames, and topology to diagnose and locate them.
The document describes a project to mass produce plastic poker chips using injection molding. Key details:
1) The group designed aluminum molds and finalized injection molding machine settings to produce over 100 regulation poker chips from a polyethylene-polypropylene blend.
2) Testing showed the chips had a hardness of 103 and appropriate tensile strength for their intended use.
3) While cycle times were relatively low, the group recommends incorporating robots and multi-cavity molds in the future to further optimize production quantities.
Conformal multi-material mesh generation from labelled medical volumes (Dec 2... (Christian Kehl)
This document discusses generating volume meshes from labelled medical volumes for finite element analysis (FEA). It introduces a new approach using the integer medial axis (IMA) transform to generate meshes faster and with fewer elements than previous methods while maintaining precision at boundaries. The IMA approach generates meshes up to 100x faster than other methods for test cases. Local surface triangulation in tangent planes is also proposed to mesh sparse samples accurately without oversampling. Future work will explore using natural neighbors for neighborhood determination in local triangulation.
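The integer medial axis idea mentioned above can be sketched in miniature. This is our own illustrative version on a tiny binary image (the threshold and neighbourhood choices are ours): compute each foreground pixel's nearest background point (the feature transform), then mark neighbouring pixel pairs whose feature points lie far apart, since the medial axis passes between pixels whose nearest boundaries are on opposite sides.

```python
# Illustrative sketch of the integer-medial-axis construction on a small
# binary image; brute force, for clarity rather than speed.

def feature_transform(img):
    """Nearest background pixel for every foreground pixel (brute force)."""
    bg = [(r, c) for r, row in enumerate(img) for c, v in enumerate(row) if v == 0]
    F = {}
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            if v:
                F[(r, c)] = min(bg, key=lambda p: (p[0] - r) ** 2 + (p[1] - c) ** 2)
    return F

def integer_medial_axis(img, gamma=2.0):
    """Mark 4-neighbour pairs whose feature points are more than gamma apart."""
    F = feature_transform(img)
    axis = set()
    for (r, c) in F:
        for (nr, nc) in ((r + 1, c), (r, c + 1)):
            if (nr, nc) in F:
                fr, fc = F[(r, c)]
                gr, gc = F[(nr, nc)]
                if ((fr - gr) ** 2 + (fc - gc) ** 2) ** 0.5 > gamma:
                    axis.update({(r, c), (nr, nc)})
    return axis

# Horizontal strip of foreground (rows 1..5) in a 7x7 image: the medial
# axis should run along the middle of the strip.
img = [[1 if 1 <= r <= 5 else 0 for _ in range(7)] for r in range(7)]
axis = integer_medial_axis(img)
```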
The document discusses the development of unstructured grid methods for computational aerodynamics. It covers structured vs unstructured meshing approaches, the development of an efficient unstructured grid solver including discretization, multigrid, and parallelization. Examples are given of large scale high-lift cases and typical transonic design studies solved using these methods. Areas of ongoing research discussed include adaptive mesh refinement and moving/overlapping meshes.
Crocotta Research is developing techniques for simulating a virtual universe made solely of particles. They are focusing on scanning, visualization, physics modeling, and data compression. For scanning, they propose using 3D volumetric textures of particles and accelerated ray-marching using gradient and distance fields. Physics modeling could also benefit from these field representations. Due to the enormous amount of data, compression techniques like adaptive representation and contour analysis will be necessary. Their goal is a speed improvement of two to three orders of magnitude over brute-force methods for simulating interactions between vast numbers of fundamental particles in a virtual universe.
- Tsuyoshi Murata from the Tokyo Institute of Technology discusses using deep learning approaches for complex networks and graph neural networks.
- He summarizes recent work on network embedding, including a paper on learning community structure with variational autoencoders and another on embedding multiplex networks.
- Murata then discusses applications of graph neural networks, challenges in training deep GCNs, the representational power and limitations of GNNs, and open problems in the field like handling shallow structures, dynamic graphs, and scalability issues.
Seminar in IPP Max-Planck. Only questions phase. 16-10-2015 (Vicent_Net)
The presentation gives an overview of the 3D-printed UST_2 stellarator. The fabrication methods used, the results and the current status are summarised.
This document discusses different modeling approaches for finite element analysis of masonry structures. It compares the springs modeling approach and expanded units modeling approach by applying them to numerically model and analyze the behavior of four experimental masonry walls under various static and pseudo-dynamic loading conditions. Both approaches are able to predict the load capacities of the walls with reasonable accuracy compared to experimental data, but the expanded units approach is found to be more versatile and accurate. The document also outlines future research directions around improving the accuracy and capabilities of the modeling approaches.
Multi-class Classification on Riemannian Manifolds for Video Surveillance (Diego Tosato)
In video surveillance, classification of visual data can be very hard due to the scarce resolution and the noise characterizing the sensor data. In this paper, we propose a novel feature, the ARray of COvariances (ARCO), and a multi-class classification framework operating on Riemannian manifolds. ARCO is composed of a structure of covariance matrices of image features, able to extract information from data at prohibitively low resolutions. The proposed classification framework consists of instantiating a new multi-class boosting method working on the manifold of symmetric positive definite d×d (covariance) matrices. As practical applications, we consider different surveillance tasks, such as head pose classification and pedestrian detection, providing novel state-of-the-art performances on standard datasets.
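The covariance-matrix feature at the core of the abstract above can be sketched as follows. This is a generic region-covariance descriptor in the spirit of ARCO; the exact per-pixel feature set in the paper differs, and the tiny patch here is purely illustrative.

```python
# Region covariance descriptor sketch: stack one feature vector per pixel
# and compute the d x d sample covariance matrix of the region.

def region_covariance(features):
    """Covariance of a list of equal-length feature vectors (d x d matrix)."""
    n = len(features)
    d = len(features[0])
    mean = [sum(f[k] for f in features) / n for k in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in features:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (f[i] - mean[i]) * (f[j] - mean[j])
    # Unbiased sample covariance (divide by n - 1).
    return [[cov[i][j] / (n - 1) for j in range(d)] for i in range(d)]

# Per-pixel features for a tiny 2x2 patch: (x, y, intensity).
patch = [(0, 0, 10.0), (1, 0, 12.0), (0, 1, 11.0), (1, 1, 13.0)]
C = region_covariance(patch)
```

The resulting matrices are symmetric positive (semi-)definite, which is what places them on the Riemannian manifold the classification framework operates on.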
The document discusses current and continuing issues in computational fluid dynamics (CFD). It identifies the main issues as accuracy, CAD geometry, users, application areas, computational resources, and new challenges. For each issue, it provides more details on specific challenges. For example, for accuracy it discusses validation, turbulence modeling, convergence, and mesh quality. It also provides examples to illustrate some of the challenges and notes activities by NAFEMS to address these issues.
[RSS2023] Local Object Crop Collision Network for Efficient Simulation (DongwonSon1)
The document proposes a Local Object Crop Collision Network (LOCC) to efficiently simulate contact between non-convex objects in GPU-based simulators. LOCC uses a neural network to detect collisions by encoding only local crops where collisions occur, leveraging the observation that locally, shapes are similar regardless of global differences. This allows for constant online computation time compared to traditional convex decomposition methods. The LOCC model combined with the Brax physics engine (LOCC-Brax) was shown to be 10x faster than Isaac Gym for simulating 30,000 environments and achieved over 96% accuracy on various test objects, demonstrating improved efficiency and generalization over traditional methods.
Using Deep Learning to Derive 3D Cities from Satellite Imagery (Astraea, Inc.)
Detection and reconstruction of 3D buildings in urban areas has been a hot topic of research due to its many applications, including 3D population density studies, emergency planning, and building value estimation. Standard approaches to extract building footprint and measure building height rely on either aerial or space borne point cloud data, which in many areas is unavailable. In contrast, high resolution satellite imagery has become more readily available in recent years, and could provide enough information to estimate a building’s height. Recent successes of deep learning on semantic segmentation have shown that convolutional neural networks can be effective tools at extracting 2D building footprints. Using a digital surface model derived using FOSS and LiDAR data as ground truth, this study goes a step further by employing state of the art deep learning architectures such as U-net to infer both building footprints and estimated building heights in one pass from a single satellite image. This application of open deep learning frameworks can bring the benefits of 3D cities to a larger portion of the world.
Experimental and analytical techniques have limitations in fluid mechanics applications. Computational fluid dynamics (CFD) uses numerical methods like finite difference, finite element, and boundary element to solve the governing equations at discrete points within a domain. It allows for complex simulations that would be difficult or expensive with physical experiments. CFD involves discretizing the domain with grids, solving the equations numerically, and analyzing the results to obtain approximate solutions for fluid flow problems.
This document discusses experimental and analytical techniques in fluid mechanics, as well as computational fluid dynamics (CFD). It notes that full-scale experiments can be difficult or expensive, so analytical models using differential equations are commonly used, though assumptions limit their applicability. CFD solves these equations numerically on a grid using techniques like finite difference, finite element, and boundary element methods. It has applications in automotive and biomedical fields.
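The finite-difference idea mentioned in the two summaries above can be shown with a deliberately tiny example of our own: the 1D Laplace equation u'' = 0 with fixed end values, solved by Jacobi iteration on a uniform grid. The problem setup is chosen so the exact solution is a straight line.

```python
# 1D finite-difference illustration: discrete u'' = 0 means each interior
# node relaxes to the average of its neighbours (Jacobi iteration).

def solve_laplace_1d(n=11, left=0.0, right=1.0, iters=5000):
    u = [0.0] * n
    u[0], u[-1] = left, right
    for _ in range(iters):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = 0.5 * (u[i - 1] + u[i + 1])   # central-difference stencil
        u = new
    return u

u = solve_laplace_1d()
```

With boundary values 0 and 1 the converged solution is linear in x, so the midpoint node sits at 0.5; the same relaxation idea generalizes to the 2D and 3D grids used in real CFD solvers.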
The document describes the Marching Cubes algorithm, which was developed in 1987 to construct 3D models from medical imaging data like CT scans. It works by dividing the volume into cubes and using the pixel values at the cube vertices to determine triangles that approximate the surface. There are 256 possible cases but they can be reduced to 14 basic patterns. The algorithm calculates surface normals to improve image quality and has been used to generate 3D models from various medical imaging modalities.
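The two core steps named above, computing the per-cube case and interpolating the crossing point, can be sketched directly; the full 256-entry triangle lookup table is omitted here for brevity.

```python
# Marching Cubes building blocks: the 8-bit case index over a cube's corners,
# and linear interpolation of where the isosurface crosses an edge.

def cube_index(corner_values, isolevel):
    """Bit i is set when corner i lies inside the surface (value < isolevel)."""
    idx = 0
    for i, v in enumerate(corner_values):
        if v < isolevel:
            idx |= 1 << i
    return idx

def edge_vertex(p1, p2, v1, v2, isolevel):
    """Linearly interpolate the surface crossing along one cube edge."""
    t = (isolevel - v1) / (v2 - v1)
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```

The case index (0-255) selects a triangle pattern from the lookup table, and `edge_vertex` places each triangle corner on the cube edge where the data crosses the isolevel.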
The document summarizes Daniel Paterson's MASc thesis defense on modeling the rapid dewatering of cellulose fibre suspensions. It outlines the project objectives to extend existing dewatering models, determine material parameters through experiments, model dewatering trends at various rates, and validate the models. The background discusses past one-dimensional modeling approaches and limitations in capturing cellulose fibre dewatering. The project aims to develop an extended model that accounts for the porous structure of cellulose fibres and associated flow-induced compaction.
1. The document discusses barriers to scaling electronic structure methods to large systems, such as the inability of sparse matrix multiplication kernels to access strong parallel scaling and entrenched data structures that limit innovation.
2. It proposes a fast, generic, and data local N-body solver approach using new mathematics that is not constrained by row-column data structures and allows a single programming model.
3. Key aspects of this approach include exploiting locality in higher dimensional product volumes through techniques like occlusion-culling, resolving identity iteratively to compress matrices by orders of magnitude, and developing optimized sparse matrix multiplication kernels.
* Discretization of a structure; 1D, 2D, and 3D element meshing
* Element selection criteria
* Mesh refinement
* Effect of mesh density in critical regions
* Use of symmetry
* Element quality criteria: Jacobian, aspect ratio, warpage, minimum and maximum angles, average element size, minimum length, skewness, tetra collapse, etc.
* Higher-order elements vs. mesh refinement
* Geometry-associated mesh
* Mesh quality
* Representation of bolted and welded joints
* Mesh independence test
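Two of the quality criteria listed above can be sketched for a triangle. Note these are common textbook forms chosen for illustration; exact definitions of aspect ratio and skewness vary between solvers.

```python
import math

# Triangle quality sketch: edge-length aspect ratio and equiangular skewness
# (skewness is 0 for an equilateral triangle, approaching 1 as it degenerates).

def tri_quality(p0, p1, p2):
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    edges = sorted([dist(p0, p1), dist(p1, p2), dist(p2, p0)])
    aspect = edges[2] / edges[0]               # longest / shortest edge

    def angle(a, b, c):                        # interior angle at vertex b
        u = (a[0] - b[0], a[1] - b[1])
        v = (c[0] - b[0], c[1] - b[1])
        cos_t = (u[0] * v[0] + u[1] * v[1]) / (dist(a, b) * dist(c, b))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

    angles = [angle(p1, p0, p2), angle(p0, p1, p2), angle(p0, p2, p1)]
    # Equiangular skewness: deviation from the ideal 60-degree angles.
    skew = max((max(angles) - 60) / 120, (60 - min(angles)) / 60)
    return aspect, skew
```

An equilateral triangle scores aspect 1 and skewness 0; a right isoceles triangle (angles 90/45/45) scores skewness 0.25, illustrating how distortion is penalized.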
1. The document discusses meshing and grid generation for computational fluid dynamics simulations. It describes the different types of grids, elements, and factors to consider for grid quality such as skewness, smoothness, and aspect ratio.
2. The key steps of the mesh generation process are outlined, including creating the geometry, generating boundary and volume meshes, and refining the mesh.
3. Guidelines for grid design are provided regarding resolving pertinent flow features, cell aspect ratios, and making the change in cell size gradual.
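The "gradual change in cell size" guideline in point 3 is commonly implemented as geometric grading, keeping the size ratio between neighbouring cells fixed. The specific numbers below are our own illustration, not values from the document.

```python
# Geometric grading of cell sizes away from a wall: each cell is a fixed
# ratio larger than its neighbour, so the size change is gradual.

def graded_cells(first, ratio, n):
    """Cell sizes h_i = first * ratio**i for i = 0..n-1."""
    return [first * ratio ** i for i in range(n)]

# 10 cells, first cell 1 mm, 20% growth per cell (an often-used upper bound).
cells = graded_cells(first=0.001, ratio=1.2, n=10)
total = sum(cells)   # layer thickness = first * (ratio**n - 1) / (ratio - 1)
```

Keeping the ratio modest (commonly at or below roughly 1.2-1.3) avoids abrupt jumps in cell size that degrade solution accuracy.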
This document provides an overview of computational fluid dynamics (CFD) and the CFD process. CFD involves using computers to simulate fluid flow problems by solving partial differential equations describing conservation laws. The CFD process consists of three main steps: pre-processing to define the problem and mesh; solving using a numerical method; and post-processing to analyze results. Key aspects that must be planned include the problem objectives, domain representation, physical models, and verification of results. Sources of error and uncertainty must also be considered.
A Multiscale Simulation Approach for Diesel Particulate Filter Design Based o... (Ries Bouwman)
This document discusses a multiscale simulation approach for diesel particulate filter design using OpenFOAM and DexaSIM. It describes reconstructing filter material microstructures from CT scans and simulating soot deposition, porosity, and permeability at the microscopic scale. These microscopic properties are then used in macroscopic simulations of the entire exhaust system to determine overall filter performance. The approach aims to provide a detailed link between microscopic material changes and resulting macroscopic filter behavior to improve design through simulations rather than experiments.
Toward In-situ Realization of Ergonomic Hand/Arm Orthosis: A Pilot Study on th... (Ardalan Amiri)
Unleashing the joint power of virtual prototyping and human modelling to address ergonomics of clinical limb and body supportive products.
3D scanning and reverse engineering, CAD techniques and inspections, topographical and topological optimization, material selection for proper additive manufacturing, CAE assessments, etc. are all the basic constituents of this project.
Compared to the available methods, a semi-automatic system has been designed to reduce the cost and time of realizing clinical orthoses while customizing them more flexibly in the best interest of doctor and patient. The system's operations were specified in detail and performed manually by the authors to investigate the possible challenges in the case of an arm/hand orthosis, with a focus on wrist injuries. Addressing these challenges and characterizing the elemental tools required by such a system was done using multiple commercial software packages in the reverse engineering, CAD, and CAE fields. The orthosis was optimized ergonomically, favoring fast prototyping as well as medical concerns. Many valuable points were drawn as conclusions of this pilot study, enhancing the concept in structural optimization and biomechanical considerations. Similar research to date lacks purposeful topology design, the vision of an automated system, and critical medical cases manifesting in biomechanical scenarios for product integrity assessments. All of these points can be found in this report.
Toward In-situ Realization of Ergonomic Hand/Arm Orthosis A Pilot Study on th...Ardalan Amiri
Unleashing the joint power of virtual prototyping and human modelling to address ergonomics of clinical limb and body supportive products.
3D scanning and reverse engineering, CAD techniques and inspections, topographical and topological optimization, material selection for proper additive manufacturing, CAE assessments, etc are all the basic constituents of this project.
In compare to the available methods to reduce the cost and time of the clinical orthosis realization while more flexibly customizing it in the best interest of the doctor and patient a semi-automatic system has been schematized for the purpose. The system operations have been recognized in details and performed by authors manually to investigate the possible challenges in case of a arm/hand orthosis with focus on wrist injuries. Addressing such challenges and characterizing the elemental tools required by such system were done using multiple commercial software packages in Reverse Engineering, CAD and CAE fields. The orthosis was optimized in an ergonomic manner favoring the fast prototyping as well as medical concerns. Many valuable point were drawn at the end as conclusion of this pilot study enhancing the concept in structural optimization and bio-mechanical considerations. The similar researches, up to the end of this report, lack purposeful topology design, envisioning an automated system and critical medical cases manifesting in bio-mechanical scenarios for product integrity assessments. All the previous points can be find in this report.
Similar to Master Thesis: Conformal multi-material mesh generation from labelled medical volumes (20)
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
Towards Distributed, Semi-Automatic Content-Based Visual Information Retrieva...Christian Kehl
Talk on big media archive visual indexing using Convolutional Neural Networks on different High-Performance Computing Platforms, developing new parametrization schemes. Talk and Poster were presented at International Supercomputing Conference 2015 (Frankfurt a. Main / Germany)
Distributed Rendering and Collaborative User Navigation- and Scene Manipulati...Christian Kehl
This document discusses distributed rendering and collaborative user navigation and scene manipulation in virtual environments. It presents an approach to extend an existing virtual reality framework to allow remote, distributed rendering on various display devices as well as remote, collaborative navigation and editing of massive 3D datasets. Technical results show the framework can successfully synchronize distributed rendering and enable collaborative navigation and modification of 3D scenes in real-time. This allows multiple remote users to interactively discuss and communicate changes to virtual environments, with potential applications in flood protection planning and other domains. Future work aims to improve the techniques for touch and mobile devices and simplify data for remote clients.
Interactive Simulation and Visualization of Large-Scale Flooding Scenarios (J...Christian Kehl
This document discusses interactive simulation and visualization of large-scale flooding scenarios. It covers topics such as real-time 3D visualization of massive LiDAR point clouds, interactive adaptive simulation of flooding scenarios, and multi-scenario comparative simulation visualization. The goal is to develop techniques to simulate and visualize different flooding scenarios and uncertainties to help decision makers and the general public understand the risks and impacts of potential flooding events.
Efficient Navigation in Temporal, Multi-Dimensional Point Sets (April 2013)Christian Kehl
1. The document discusses algorithms and techniques for efficient navigation of temporal, multi-dimensional point set data.
2. A goal is to develop visualization algorithms that support user navigation through time-series data, including real-time rendering, user-centered browsing, and navigation via visual summaries.
3. The research will focus on scalable rendering and visualization of time-dependent point sets, efficient browsing of time-dependent datasets, and navigation using visual summaries to guide users through important events.
Smooth, Interactive Rendering and On-line Modification of Large-Scale, Geospa...Christian Kehl
This document discusses techniques for interactively rendering and modifying large-scale geospatial LiDAR point set data. It proposes a rendering-on-budget approach that combines importance-based streaming with a PID controller to balance load. This allows for smooth rendering while modifying streamed data online without quality loss. Proof of concepts demonstrate modifying attributes like color via polygons or textures, and displacing vertices using displacement maps. Performance is improved over traditional level-of-detail approaches.
WP 4 – Interactive simulation and 3D visualization for water policy developme...Christian Kehl
This document describes methods for interactive 3D simulation and visualization of water policies. It discusses using large high-resolution topographic data integrated with flood simulations to understand flood protection policies. Methods include level-of-detail rendering to efficiently display large datasets, geospatial data integration using KML and triangulated meshes, and continuous temporal interpolation to animate realistic wave simulations in real-time. Use cases apply these methods to study historic floods in locations like Wieringermeer and visualize the 1953 North Sea flood for policy discussions. The results allow interactive exploration of urban flood studies with smooth level-of-detail transitions and animated water levels.
This document provides an overview of different types of LiDAR acquisition methods. Aerial LiDAR is used to capture large areas and generates 2.5D data by scanning from aircraft. Terrestrial LiDAR captures smaller areas in full 3D using static or mobile ground-based units. Bathymetric LiDAR maps shallow underwater areas using dual lasers. Atmospheric LiDAR surveys air properties by transmitting laser pulses and analyzing backscatter. Common to all is using a laser transmitter and detector to measure discrete points or full waveforms, with variations depending on the objective and environment.
Depth image recognition using isomorphic graph theoryChristian Kehl
This document discusses using graph theory and depth images to recognize objects. It proposes constructing a mesh of the world from depth camera data, transforming depth images into normal maps, and representing each region or plane as a different color "billboard". Objects in images can be matched by growing regions into nodes connected by edges between neighboring regions. Graphs of images could then be compared to find the largest isomorphic subgraphs and determine if the images match.
Graph theory - Traveling Salesman and Chinese PostmanChristian Kehl
Traveling Salesman and Chinese Postman problems
1. Problem Description and Complexity
2. Theoretical Approach
3. Practical Approaches and Possible Solutions
4. Examples
This document discusses parallel computing on GPUs using OpenCL. It provides an overview of basics of parallel computing, a brief history of SIMD and MIMD architectures, and details of OpenCL. It then describes a case study of using OpenCL and OpenMP to perform a Monte Carlo study of a spring-mass system. The study models the system, uses the Euler method for numerical integration, develops SIMD approaches for GPUs, implements OpenMP, analyzes results and speedup, and provides conclusions on parallelization.
Point clouds are sets of unordered points without connections. They can be generated from 3D scans and used for medical or industrial applications. Point clouds lack properties like textures and normals, so lighting cannot be directly applied. They must be converted into meshes or polygon networks for solid modeling. This can be done through algorithms like triangulation, two-peasant graphs, or marching cubes. Constructive solid geometry uses boolean operations on basic geometric primitives to combine them into complex 3D models. It is commonly used in CAD software for engineering design.
Vortrag zur Bildbe- und verarbeitungauf der Grafikkarte mit Hilfe von OpenCL. Hintergrund ist die Bildvorverarbeitung und Verbesserung zur bei Gesichtserkennungsverfahren zur Erhöhung der Wiedererkennungsrate.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
3. Challenge the future
Introduction
• Input: segmented volume image
• Expected output: labelled volume mesh
• Restrictions to the final mesh
• Algorithm: robust and fast
4. Introduction
• Challenge: bio-medical structures
• Cavities and holes
• Non-manifold structures
• Junctions of multiple structures
[Figures: segmented images and the corresponding meshes]
5. Research Questions
Which meshing concept is most suitable for extracting accurate, multi-material volume meshes from medical volume images?
How can the performance of a chosen concept be evaluated?
What criteria can be applied to measure the quality of a multi-labelled volume mesh?
11. Our starting point
• BioMesh3D: dynamic particle system
Advantages:
• Feature-adaptive mesh
• Optimal vertex distribution
• Very good mesh quality
• Meshing of multi-genus, non-manifold structures
12. Issues
• Multi-labelled junctions
• Structural conformity
• Minimal tetrahedral element count
• Medial Axis Transform runtime
• High MAT accuracy required
13. Issues
• Mathematical explanation of the meshing problem
• Delaunay Triangulation
• Sampling requirement
• Local feature size
• Medial Axis Transform
• Discrete image space
14. Alternative Meshing Algorithms
• Meshing algorithms for sparse particles:
• Delaunay & Alpha Shapes
[Image: Fischer2004, ETH Zürich / Applied Geometry Group]
15. Alternative Meshing Algorithms
• Meshing algorithms for sparse particles:
• Voronoi Diagrams
[Image: Amenta2001]
16. Alternative Meshing Algorithms
• Meshing algorithms for sparse particles:
• Implicit Surfaces
17. Alternative Meshing Algorithms
• Meshing algorithms for sparse particles:
• Local Triangulation
• Spherical Parametrization
[Image: Brink2005]
18. Alternative Meshing Algorithms
• Meshing algorithms for sparse particles:
• Local Triangulation
• Poisson Surface Reconstruction
[Image: Kazhdan2006]
19. Alternative Meshing Algorithms
• Meshing algorithms for sparse particles:
• Local Triangulation
• Generic 2D meshing algorithms (Graham, TwoPeasant)
[Image: created with FastSurfaceReconstruction – PCL / Marton2009]
20. Alternative Meshing Algorithms
• Meshing algorithms for sparse particles:
• Local Triangulation
• Local Delaunay Triangulation
21. Contribution
• Fast medial axis transform for faster execution
• Local meshing for multi-labelled volumes
• Using known iso-surface properties: normals and principal curvatures
• Enabling sparser particle sampling, or
• More detail with equal sampling
23. Our Approach: Integer Medial Axis
• Motivation: much faster computation
• BioMesh3D uses: center of inscribed spheres
• We use: shortest path in the feature transform [Hesselink and Roerdink 2008]
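A hedged, pure-Python illustration of the feature-transform idea behind the Integer Medial Axis. This is not the linear-time algorithm of Hesselink and Roerdink: the feature transform is computed brute-force, and the neighbour-disagreement threshold `gamma2` and the toy rectangle are illustrative assumptions.

```python
# Sketch: compute a feature transform (nearest background pixel for every
# pixel), then mark foreground pixels whose neighbours' feature points lie
# far apart. The published algorithm computes this in linear time; this
# brute-force version is only meant to show the idea in 2D.

def feature_transform(shape, background):
    """Map every pixel to its nearest background pixel (brute force)."""
    h, w = shape
    return {(x, y): min(background,
                        key=lambda b: (b[0] - x) ** 2 + (b[1] - y) ** 2)
            for x in range(h) for y in range(w)}

def integer_medial_axis(shape, foreground, gamma2=2):
    """Simplified IMA test: a foreground pixel joins the axis when the
    feature points of two adjacent pixels disagree by more than sqrt(gamma2)."""
    h, w = shape
    background = [(x, y) for x in range(h) for y in range(w)
                  if (x, y) not in foreground]
    ft = feature_transform(shape, background)
    axis = set()
    for (x, y) in foreground:
        for nb in ((x + 1, y), (x, y + 1)):
            if nb in foreground:
                fa, fb = ft[(x, y)], ft[nb]
                if (fa[0] - fb[0]) ** 2 + (fa[1] - fb[1]) ** 2 > gamma2:
                    axis.add((x, y))
    return axis

# Toy object: a solid 5x9 rectangle inside a 7x11 image.
fg = {(x, y) for x in range(1, 6) for y in range(1, 10)}
axis = integer_medial_axis((7, 11), fg)
print(sorted(axis))  # a sparse set of interior pixels approximating the skeleton
```

The disagreement test is the essential trick: where nearest-boundary assignments flip between adjacent pixels, a medial axis point lies between them.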
24. Our Approach: Surface Meshing
• Parameter extraction from multi-labelled iso-surfaces (normals, curvatures, feature size)
• Particles + parameters → local reconstruction
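The normal extraction mentioned above can be sketched as the normalised gradient of the volume's scalar field, estimated with central differences. A minimal sketch, assuming a synthetic sphere distance field in place of a real segmented volume image:

```python
import math

# Sketch: estimate iso-surface normals as the normalised central-difference
# gradient of a scalar field. The synthetic sphere distance field stands in
# for a segmented volume image; grid coordinates are illustrative.

def sphere_field(x, y, z, center=(8.0, 8.0, 8.0), radius=5.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return math.dist((x, y, z), center) - radius

def normal_at(field, x, y, z, h=1.0):
    """Central-difference gradient of `field`, normalised to unit length."""
    g = (field(x + h, y, z) - field(x - h, y, z),
         field(x, y + h, z) - field(x, y - h, z),
         field(x, y, z + h) - field(x, y, z - h))
    n = math.sqrt(sum(c * c for c in g))
    return tuple(c / n for c in g)

# On the sphere's surface the normal should point radially outward.
n = normal_at(sphere_field, 13.0, 8.0, 8.0)  # point on the +x side
print(n)  # → (1.0, 0.0, 0.0)
```

On real label volumes the same stencil would be applied to a smoothed indicator or distance function per label, since raw label images are piecewise constant.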
25. Our Approach: Surface Meshing
• Idea: replace 3D Delaunay tetrahedralization with local 2D Delaunay triangulation
• 3D sampling constraints → 2D surface constraints
• E.g.: project to the local tangent plane
• Locally watertight
• Issue: we need a guaranteed closed, watertight 3D surface
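The tangent-plane projection step can be sketched as follows. A minimal sketch: the 2D Delaunay triangulation itself is not shown, and the centre point, normal, and neighbour coordinates are illustrative assumptions.

```python
import math

# Sketch of the projection step behind a local 2D Delaunay triangulation:
# the neighbours of a surface particle are projected into the tangent plane
# given by the particle's iso-surface normal; a 2D Delaunay triangulation
# would then run on the resulting (u, v) coordinates.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalized(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def tangent_basis(normal):
    """Two orthonormal vectors spanning the plane orthogonal to `normal`."""
    helper = (1.0, 0.0, 0.0) if abs(normal[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = normalized(cross(normal, helper))
    v = cross(normal, u)  # unit length already: normal and u are orthonormal
    return u, v

def project_to_tangent_plane(center, normal, neighbours):
    u, v = tangent_basis(normal)
    return [(dot(sub(p, center), u), dot(sub(p, center), v))
            for p in neighbours]

coords = project_to_tangent_plane(
    (0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
    [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
print(coords)  # two unit-length, mutually orthogonal 2D coordinates
```

Because the projection is an isometry for points already in the plane, near-planar neighbourhoods keep their distances, which is exactly why the 3D sampling constraints translate into 2D ones.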
26. Visualization
• Toolbox: VTK
• Visualization of multiple information layers and multi-labelled objects
• Separate control for:
• All labelled objects of one information group
• All information of one label
• Volume mesh clipping
• Edge highlighting for visual element size estimation
28. Conclusion
• Initial analysis revealed:
• MAT bottleneck
• Sampling limitation dependencies
• Alternative approaches:
• Integer Medial Axis speed-up
• Local Delaunay Triangulation: less restrictive reconstruction
• Prototype implementation with:
• Multi-material surface mesh visualization
• Multi-material volume mesh visualization
• Quality measurement
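One way such quality measurement can work is via the minimum dihedral angle of each tetrahedron, a common criterion in the Delaunay-refinement literature cited in the notes. A minimal sketch with an illustrative regular tetrahedron (the metric choice and vertex data are assumptions, not the thesis prototype's exact measure):

```python
import math
from itertools import combinations

# Sketch of one widely used tetrahedral quality measure: the minimum
# dihedral angle. For each of the six edges, the two adjacent triangular
# faces meet at a dihedral angle; degenerate "sliver" elements show very
# small (or very large) dihedral angles.

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _norm(a): return math.sqrt(_dot(a, a))
def _normalized(a): n = _norm(a); return tuple(x / n for x in a)
def _reject(a, unit):
    """Component of `a` orthogonal to the unit vector `unit`."""
    d = _dot(a, unit)
    return tuple(x - d * u for x, u in zip(a, unit))

def dihedral_angles(tet):
    """All six dihedral angles (degrees) of a tetrahedron given as 4 points."""
    angles = []
    for i, j in combinations(range(4), 2):
        p, q = tet[i], tet[j]
        r, s = [tet[k] for k in range(4) if k not in (i, j)]
        e = _normalized(_sub(q, p))          # unit vector along the edge
        vr = _reject(_sub(r, p), e)          # face directions orthogonal to edge
        vs = _reject(_sub(s, p), e)
        cosang = _dot(vr, vs) / (_norm(vr) * _norm(vs))
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, cosang)))))
    return angles

# Regular tetrahedron: every dihedral angle equals arccos(1/3).
regular = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
print(min(dihedral_angles(regular)))  # ≈ 70.53
```

For finite element use, meshes are usually screened for a minimum dihedral angle well above zero, since slivers degrade stiffness-matrix conditioning.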
29. Future Work
• Watertight local reconstruction algorithm
• Modified sampling criterion
• Mesh quality criteria consensus needed
30. Propositions
1. Discrete Medial Axis Transform schemes are more prone to errors than continuous schemes.
2. Feature-preserving surface reconstruction by 3D Delaunay tetrahedralization is not possible.
3. The spatial 1-ring neighbourhood of a point cloud depends on the meshing scheme.
4. Local Delaunay Triangulation enables shape- and feature-conform surface reconstruction of loose samples.
5. Scientific work requires a methodological as well as a systematic perspective.
Editor's Notes
Today, I will present parts of my master thesis. The related computer graphics field of my thesis topic is computational geometry. Therefore today’s title of my talk is ...
Introduction:
What are we trying to do? (very) roughly
Initial Conditions / pre-requirements
General guideline
Final goal
Related Work:
Recent development in that area
Different approaches
Basis of my work (with pipeline sketch)
Criticism & Contribution
general issues of dynamic particle systems
day-to-day trouble
consequently arising our contribution to the field
Our Approach
[/comment for later]
Integer Medial Axis
Normal Extraction from Multi-Label Iso-Surface
Fast Surface Reconstruction
Filtering
On-going work
Quality Measurements
Appropriate Volume Visualization (Distance Fields)
Runtime + Quality Comparison
General idea/guideline:
Start: CT scan data of the patient => segmentation into separate regions => conversion into a separate surface description => generation of a volume mesh based on particular geometric primitives => fusion of the separate models into one model of separate labels with interfaces
Restrictions to final mesh:
topological conformity with focus on label interfaces
reasonable amount of geometric elements
Adaptive to features
preferably features are preserved
Advantages of the classical method:
Fast
Easy tessellation (element number control)
Drawbacks:
- not topology-conformant
- sometimes bad quality
- inordinate number of elements (bad for FEA)
- stepping artifacts with Marching Cubes
- no feature-adaptive meshing
With example pictures, refer to author and year (e.g. Dey2011)
Delaunay:
Feature preserving non-linear filter methods
Interface extraction as cells
Interface cell protection with protecting balls during Tetrahedralization
Sliver removal
Advantages DT:
Good runtime behavior
Conformant tetrahedra and good quality
Feature adaptive meshing
Watertight meshes
Disadvantages:
Topology correctness for vertices and edges, interface planar boundaries not guaranteed to be correct (protecting balls doesn’t work for planes)
High dependency on pre-processing
For a long time no full solution existed (now there is one in Amira)
Point-based Registration Framework:
Extracts fine grid-resolution volume mesh and Feature points (based on image properties)
refine grid-volume to feature adaptive, precise volume mesh representation
improved constrained Delaunay Tetrahedralization refinement
improvements on geometric quality guarantees for interface surfaces by blacklisting [Kahnt2011]
Delaunay refinement on piecewise-smooth complexes [Dey2011]
interface-conform, multi-tissue constrained Delaunay refinement with focus on ideal Dihedral Angles [Foteinos2011]
extension of Meyer’s particle-based meshing approach on single-material CAD-surfaces with improved particle system and local triangulation scheme
- optimal shape via the LDA α-shape and conformal α-shape [Presley2011]
introduction of the Local Density-Adaptive α-shape [Maillot2010]
realization of 3D LDA α-shapes [Chevallier2011]
modelling ambiguous 3D structures with compoundly weighted α-shapes [Chazals2011]
notation of extremal surfaces [Lui2010]
introduction of multi-level partitions of unity implicit surfaces [d’Otreppe2011]
smooth surface description by modified Marching Cubes interpolator [Manson2010]
Pictures
Dynamic Particle Systems:
good Delaunay Tetrahedralization requires vertices satisfying a good sampling criterion [Amenta et al.]
Idea: distribute particles along the material’s surface to generate “optimal” Delaunay vertices
Interface adaptive and feature adaptive sampling
Delaunay Tetrahedralization forms volume mesh; surfaces at multi-material interfaces form boundary mesh
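The particle-distribution idea can be sketched one dimension lower: particles constrained to a circle repel each other until they are roughly uniformly spaced. A hedged toy model, not BioMesh3D's actual energy; the surface, the capped force law, and the step and iteration counts are illustrative assumptions.

```python
import math

# Minimal sketch of the dynamic-particle idea: particles constrained to a
# surface (here: the unit circle, parametrised by angle) repel each other
# until they relax toward a uniform, sampling-friendly distribution of
# candidate Delaunay vertices.

def relax(angles, steps=500, step=0.002):
    pts = list(angles)
    n = len(pts)
    for _ in range(steps):
        forces = [0.0] * n
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                # signed angular distance from j to i, wrapped to (-pi, pi]
                d = math.atan2(math.sin(pts[i] - pts[j]),
                               math.cos(pts[i] - pts[j]))
                if d != 0.0:
                    # repulsion pushing i away from j, capped for stability
                    forces[i] += math.copysign(min(1.0 / abs(d), 10.0), d)
        pts = [(p + step * f) % (2 * math.pi) for p, f in zip(pts, forces)]
    return pts

def gaps(angles):
    """Angular gaps between consecutive particles on the circle."""
    s = sorted(a % (2 * math.pi) for a in angles)
    return [(s[(i + 1) % len(s)] - s[i]) % (2 * math.pi)
            for i in range(len(s))]

clustered = [0.0, 0.1, 0.2, 0.3]
relaxed = relax(clustered)
print(min(gaps(clustered)), max(gaps(clustered)))  # tiny gaps and one huge gap
print(min(gaps(relaxed)), max(gaps(relaxed)))      # gaps move toward pi/2
```

In the real system the forces also scale with local feature size and curvature, which is what makes the final sampling adaptive rather than uniform.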
Advantages:
watertight mesh
high-quality mesh elements
Copes with interfaces vertices, edges and faces
Tessellation adapted to interfaces and junctions
Improvements:
conform meshing for more than 2-material junctions [Quadrants picture]
capture sharp features in sampling
correct boundary- and volume description
robust implementation
Criticism:
Tightening filters volume
Non-feature preserving (feature reducing) operation
[CLICK FOR PICTURE]
Low curvature -> high radius of curvature and v.v.
A high radius of curvature goes along with smoother surfaces, which are tessellated more coarsely
Therefore: a small number of elements gradually reduces features
[CLICK FOR PICTURE]
-Limiting curvature alters model and topology
Hard features (corners, edges) are eroded
Curvature limitation -> lost features -> features are of topological significance in non-manifold structures
[CLICK FOR PICTURE]
Feature size detected by the medial axis depends on the radius of curvature
The medial axis is well defined in continuous space; a correct medial axis is hard to compute in discrete space (i.e. volume images)
Therefore correct feature size extraction (with the required level of detail) from volume images is not guaranteed, but required by the particle system
Imprecise feature size leads to mathematical convergence problems during the iterative refinement of the particle system
Boundary extraction can be done by methods other than 3D Delaunay Triangulation, but these require additional surface information
Surface information is present but unused
Because the meshing output depends significantly on the curvature parameter, quality control is hard to establish
Local feature size (lfs) = distance from a boundary point to the medial axis
At corners, the distance from the medial axis to the boundary = radius of curvature = local feature size
Sharp corners: feature size = 0
Sampling criterion for the Delaunay triangulation: d(p1, p2) = ε · lfs; practical bound ε ≈ 0.6
lfs = 0 at corners => d(p1, p2) = 0 => the number of particles at sharp corners -> ∞
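A tiny numeric sketch (an assumed helper, not from the source) of why the ε-criterion d(p1, p2) = ε · lfs blows up at sharp corners:

```python
# As lfs -> 0 at a sharp corner, the required particle spacing eps * lfs -> 0,
# so the particle count along a fixed boundary length diverges.
def particles_needed(lfs, eps=0.6, length=1.0):
    """Particles required to cover `length` at spacing eps * lfs."""
    spacing = eps * lfs
    if spacing == 0.0:
        return float("inf")   # sharp corner: lfs = 0 -> infinitely many particles
    return length / spacing

for lfs in (1.0, 0.1, 0.01, 0.0):
    print(lfs, particles_needed(lfs))
```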
[CLICK FOR PICTURE]
Detection of the medial axis in continuous space is defined by partial differential equations
(Solving the Eikonal equation of the distance field’s gradient and detecting ridges with an evolving front; MA = outward normal flux (negative divergence) of the evolving front)
This is not directly transferable to discrete space
Discrete approximations create many spurious branches; distance/feature size are not precise enough
The particle system needs a medial axis close to the continuous-space one
That is the reason for Meyer’s new MA approach based on the isosurface description, but it is extremely slow and computationally unstable
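To make the ridge-detection idea concrete, here is an illustrative discrete approximation (not Meyer’s isosurface-based method): compute a distance field with SciPy and flag pixels where the divergence of the normalized gradient, the outward flux above, is strongly negative. The ad-hoc threshold is exactly the kind of imprecision criticized above:

```python
# Illustrative sketch: approximate the medial axis of a binary 2D shape as
# ridges of the distance field, detected via strongly negative divergence of
# the normalized distance gradient. The threshold is ad hoc; discrete
# approximations like this produce the spurious branches noted above.
import numpy as np
from scipy import ndimage

shape = np.zeros((64, 64), bool)
shape[16:48, 8:56] = True                     # a simple rectangle

dist = ndimage.distance_transform_edt(shape)  # distance to the boundary
gy, gx = np.gradient(dist)
norm = np.hypot(gx, gy) + 1e-12
divergence = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)

medial_axis = shape & (divergence < -0.3)     # strongly negative flux = ridge
print(medial_axis.sum(), "medial-axis pixels")
```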
-Introducing a new meshing algorithm will solve most of these problems
Delaunay and alpha shapes inherit the sampling-criterion limitation
The power crust algorithm, based on Voronoi diagrams, inherits the same sampling criterion
Implicit surfaces are not restricted by the sampling criterion, but an implicit surface needs to be converted to a triangle mesh, which again involves resampling
Local meshing schemes project neighbouring points into a local tangent plane; a variety of local meshing algorithms can be used
-new medial axis approach to improve the runtime
-modify the particle system to capture sharp features and not be limited by the ε-criterion
-choice of a suitable local triangulation algorithm
Abbreviation: CMS = center of maximal spheres
- Base: smoothed marching cubes model; option to take Surface Nets [de Bruin et al. 2000]
- averaging of parameters from particle attributes
Compute principal curvatures and consistently oriented normals
The nearest surface points to the particles are extracted
Parameter averaging and vector normalization
Issue: neighbourhood computation based on the sampling criterion [Gopi 2000] is replaced with the local feature size
A TBN matrix is computed at each point
The neighbourhood is projected onto the tangent plane
2D Delaunay triangulation on the tangent plane (watertight)
For a closed surface, the projected positions of connected neighbourhoods need to be similar (otherwise the topology is wrong)
-> the sampling criterion ensures small gradual changes through dense sampling of high-curvature regions
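The projection-and-triangulation step can be sketched generically (an assumed implementation, using a common TBN basis recipe and scipy.spatial.Delaunay as the 2D triangulator, not the thesis code):

```python
# Sketch: build a tangent/bitangent basis from a point's normal, project its
# neighbours onto the tangent plane, and run a 2D Delaunay triangulation there.
import numpy as np
from scipy.spatial import Delaunay

def tangent_plane_triangulation(center, normal, neighbours):
    """Project `neighbours` onto the tangent plane at `center` and triangulate."""
    n = normal / np.linalg.norm(normal)
    # Pick any vector not parallel to n to span the tangent plane (T, B).
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(n, helper); t /= np.linalg.norm(t)
    b = np.cross(n, t)
    rel = neighbours - center
    uv = np.stack([rel @ t, rel @ b], axis=1)   # 2D coordinates in the plane
    return Delaunay(uv).simplices                # triangle index list

pts = np.array([[1.0, 0, 0.1], [0, 1.0, 0], [-1.0, 0, 0.1], [0, -1.0, 0], [0.7, 0.7, 0.05]])
tris = tangent_plane_triangulation(np.zeros(3), np.array([0.0, 0, 1.0]), pts)
print(len(tris), "triangles")
```

As noted above, this is only watertight across neighbourhoods if adjacent projections agree, which the dense sampling of high-curvature regions is meant to ensure.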
Consistent, correct surface normals are required
The curvature limitation is lowered, but not eliminated
Problem: the interplay of features, feature size and curvature
The base problem here is the particle system
[future idea: curvature-independent sampling criteria]
The coupling between tessellation and curvature still exists
[ideas to overcome this:
use precomputed curvatures instead of the local feature size everywhere
a subsequent triangle reduction (fusion) operation]
The lowered curvature limitation and meshing limitation can improve topological correctness
Because everything is based on the local feature size, dependency on the medial axis is inevitable
Local Delaunay triangulation is a much more robust scheme than 3D Delaunay; it is guaranteed to converge
Isosurface parameters are now effectively used in the new meshing scheme to improve stability
Quality control remains hard to establish