This document presents an introduction to structural optimization. It discusses how structural optimization aims to design mechanical components to best withstand applied loads by minimizing weight or maximizing stiffness, while satisfying constraints. It outlines the typical formulation of a structural optimization problem, including defining an objective function and design variables. Two main topology optimization methods are described: minimum compliance design and bi-directional evolutionary structural optimization (BESO). The document also introduces the software Inspire, which will be used to conduct a practical structural optimization example of a mechanical component for the heavy industry sector in the subsequent chapters.
Analysis and Structural Optimization of a Mechanical Component for the Heavy Industry
POLITECNICO DI TORINO
Master of Science in Mechanical Engineering
Analysis and Structural Optimization of a
Mechanical Component for the Heavy Industry
Supervisors:
Prof. Dr. Eugenio Brusa
V. P. Andrea De Luca (Company Tutor)
Candidate:
Alessandro Musu
Academic Year 2015-2016
I dedicate this modest work,
which concludes a long and demanding journey,
to the people who made it all possible.
To my parents,
and to my sisters Giulia and Valentina,
for helping and encouraging me
through the difficulties of these years.
To my dearest friends,
for never ceasing to believe in me.
To my supervisor, Prof. Eugenio Brusa,
for his availability, patience and encouragement,
and for his help in writing this thesis.
Table of Contents
Preface
1. Introduction to Structural Optimization
   1.1. Structural Optimization in the Design Process
2. Shape Optimization
   2.1. Shape Optimization Methods
3. Topology Optimization
   3.1. Minimum Compliance Design Formulation
   3.2. Conditions of Optimality
   3.3. Computational Procedure
   3.4. Analysis Refinement and Issues
   3.5. Bi-Directional Evolutionary Structural Optimization Method
   3.6. BESO with Material Interpolation Scheme and Penalization
   3.7. BESO for Extended Topology Optimization Problems
      3.7.1. Minimizing Structural Volume with a Displacement Constraint
      3.7.2. Topology Optimization for Natural Frequency
      3.7.3. Topology Optimization for Multiple Load Cases
      3.7.4. BESO based on von Mises Stress
4. Structural Optimization with Inspire
   4.1. Optimization Terminology and Definitions
5. Structural Optimization: a Practical Example
   5.1. DANIELI's SRW 18 Guide System Description
   5.2. Loads and Constraints Analysis
   5.3. Finite Element Analysis of the Current Roller Holder
   5.4. Structural Optimization Setting
   5.5. Maximize Stiffness - Results
      5.5.1. Free Shape Optimization, Stainless Steel 316L
      5.5.2. Extrusion along Z-axis Optimization, Stainless Steel 316L
      5.5.3. Extrusion along Y-axis Optimization, Stainless Steel 316L
      5.5.4. Free Shape Optimization, CoCrMo Alloy
      5.5.5. Extrusion along Z-axis Optimization, CoCrMo Alloy
      5.5.6. Extrusion along Y-axis Optimization, CoCrMo Alloy
      5.5.7. Free Shape Optimization, Ti6Al4V-ELI Alloy
      5.5.8. Extrusion along Z-axis Optimization, Ti6Al4V-ELI Alloy
      5.5.9. Extrusion along Y-axis Optimization, Ti6Al4V-ELI Alloy
   5.6. Minimize Mass - Results
      5.6.1. Free Shape Optimization, Stainless Steel 316L
      5.6.2. Extrusion along Z- and Y-axis Optimization, Stainless Steel 316L
      5.6.3. Free Shape Optimization, CoCrMo Alloy
      5.6.4. Extrusion along Z- and Y-axis Optimization, CoCrMo Alloy
      5.6.5. Free Shape Optimization, Ti6Al4V-ELI Alloy
      5.6.6. Extrusion along Z- and Y-axis Optimization, Ti6Al4V-ELI Alloy
6. Conclusion
7. Bibliography and References
Preface
This work studies a practical application that a cutting-edge technology
such as Additive Manufacturing offers to the heavy industry sector.
Additive Manufacturing (from now on referred to as AM) is the set of
technologies that produce three-dimensional components by adding material
layer upon layer, whether that material is plastic, metal or concrete.
AM's range of application is vast. Although its main early application was
rapid prototyping in the form of pre-production visualization models, it is
nowadays being used to produce high-performance lightweight components for
the aircraft industry, and it shows growing applications in the medical and
automotive sectors as well.
The main shared strengths of AM technology are the possibility of obtaining
complex shapes, with internal channels and ducts; of regulating the density
of the material when a sponge-like structure is needed; and of re-designing
a component to improve its performance in critical areas or to reduce its weight.
Another major advantage is the possibility of accelerating design and
production. A particularly complex component can be designed in
computer-aided design software and then, once the design process is
complete, produced on a suitable AM machine by loading the design file,
with all the specifications of the part, into the machine software. With
this technology, design and production may take a few weeks, whereas
conventional production processes may require months because of their
several distinct production steps.
In principle, once the production process is completed, the new component
is ready to be employed and requires no further tooling. One limit worth
mentioning, however, is the often poor surface finish achievable with the
current technology, which usually requires further surface treatment to
make the component suitable for its application.
The other main disadvantage is the particularly high production cost of AM
technology, which ranges from metallic powder production to AM machine
development; however, it is commonly believed that the current pace of
development and improvement will make the technology more accessible and
less expensive in the near future.
All in all, AM is a technology whose development has experienced
exponential growth in recent years, and new applications for it are being
discovered every day.
The aim of this work is to define a suitable design process to rethink
current components for heavy industry, in order to take full advantage of
all the possibilities AM technology offers, which are already being
exploited by other major industrial actors.
Even though a free-forming technology can provide huge benefits for any
industrial application, heavy industry requirements generally differ from
aerospace, medical and automotive ones, since the latter consider lightness
a key factor. Some examples may clarify this point: lightweight components
reduce aircraft fuel consumption; lightweight wheel struts make cars more
stable, since their unsuspended mass is reduced; a lightweight internal
prosthesis will certainly let injured people recover faster and continue
their lives with less discomfort; thousands of other examples follow this
line of reasoning.
On the other hand, heavy industry applications must withstand heavy loads
for long periods of time in generally hostile environments, from both
chemical and thermal points of view. Lightweight components may therefore
be interesting when they reduce metallic material consumption and waste,
but they still have to fully withstand the heavy duty that the components
they are replacing were designed for.
Heavy industry requirements easily exclude most applications of plastic
materials, except perhaps for insulating components or visualization
models; AM technology, however, is now being applied to a broader range of
metallic materials and alloys than ever.
AM technology can be a winning choice when it is used to reduce the number
of secondary machining operations, to build single-piece components out of
parts that were previously made from several pieces welded together, to
reduce the metal waste caused by subtractive manufacturing, or to produce
complex mould components with inner channels that improve heat dissipation
and solidification. It is thus clear that Additive Manufacturing offers
applications to the heavy industry sector as well, and from this
realization arose Danieli Group's interest in the technology.
Danieli Group is an Italian multinational company specialized in the
construction and development of turnkey plants for heavy industry. The
company is also a worldwide benchmark for ironmaking and steelmaking, flat
products, long products and non-ferrous metal production; within this broad
production range, it pays special attention to sustainable production,
material recycling and pollution control.
Danieli Group's headquarters are in Buttrio (Udine, Italy), which hosts the
company's main technical and administrative offices.
The Group has seven workshops across Italy and many others abroad, across
Europe but also in Russia, Thailand, China, India and the U.S.
Danieli Group counts more than ten thousand employees and an overall
production area of more than 2 million m²; this remarkably broad footprint
is the result of more than a hundred years of growth and development.
During its history, Danieli Group has become well aware that its know-how
covers not only technological process and design but also manufacturing
capability. This awareness pushed the company to develop the in-house
ability to fully control every detail of its design and production
processes, so as to avoid any compromise on the quality and reliability of
the services and equipment supplied. Every workshop area across the world
is therefore owned and managed directly by Danieli and operates according
to Danieli's manufacturing know-how, guaranteeing the same quality level
worldwide.
Danieli Group manufactures most of its equipment in its own workshops in
Italy, China and Thailand, obtaining competitive production costs and the
desired quality standard.
Over the years, Danieli Group has strengthened its technical know-how in
every step of metal production, from iron ores to electric arc furnaces,
from hot and cold rolling mills to recycling plants, with an exceptional
specialization in flat and long steel products and a further specialization
in non-ferrous metal products that broadens its production range.
Over the last eight years, Danieli Group has introduced a large number of
innovations, thanks to an average investment of 140M Euro/year in research
and development [13].
To conclude this preface, a general description of the work is given; it
has been divided into five chapters.
The first chapter introduces the reasoning behind structural optimization.
A simplified structural optimization process is described, together with a
comparison between the classic design process and the structural
optimization process; the importance of defining the optimization objective
and the influence of the design variables on the design process are
presented in this chapter.
The second chapter describes a simplified optimization process called shape
optimization, which can minimize mass by changing or determining the
boundary shape while satisfying all design requirements; shape optimization
comes from the need to achieve the best results with limited resources.
The third chapter presents two of the main topology optimization methods,
the Minimum Compliance Optimization method and the Bi-Directional
Evolutionary Structural Optimization (BESO) method, from the point of view
of their mathematical implementation. Topology optimization denotes a
design optimization formulation that predicts the layout of a structural
mechanical system: the topology is an outcome of the optimization
procedure.
The fourth chapter introduces the software used to carry out a practical
example of structural optimization, Altair's solidThinking Inspire. The
chapter describes how the software computes an optimal solution to a
problem, together with the most important methods used to define the
optimal solution itself.
The fifth and last chapter presents a practical example of structural
optimization applied to a mechanical component for the heavy industry
sector. A methodology for carrying out the structural optimization process
is presented, along with some possible optimal results; however, the
limited amount of data on the component's working conditions prevented the
research from reaching a unique and reliable design ready for production
with Additive Manufacturing technology.
Finally, conclusions are drawn on the basis of the findings of the present
study.
1. Introduction to Structural Optimization
The aim of this research is to define a procedure that improves the design
process so as to produce components or parts that are better than those
currently employed: lighter, stiffer and better performing. Improving the
structural aspects of part design means defining better design procedures,
better materials and better ways of manufacturing the final products; this
process of improving a component, seen as an evolution of its properties
and capacity, is called optimization.
Optimal structural design is becoming increasingly important because of
limited material resources, environmental impact and technological
competition, all of which demand lightweight, low-cost and high-performance
structures.
Optimization is defined as the process of selecting the best variables from
a wide range of feasible solutions, and it can be applied to many different
fields, such as the aircraft, automotive and medical industries. The
following are some practical examples:
- Design of a bicycle frame for minimum weight;
- Design of a beam for maximum stiffness;
- Design of a bridge for lowest natural frequency;
- Design of thermal conduction systems to maximize heat transfer.
This research presents an example of a structural optimization problem.
Structural optimization aims to produce an assemblage of materials that
best sustains the applied load; the objective may be to minimize the
component's weight or to maximize its stiffness (which is equivalent to
minimizing its compliance). To achieve this goal, certain constraints must
be applied to the problem: to name a few, constraints on the volume of
material, on maximum displacements or on maximum allowed stress. Figure 1
below illustrates this concept.
Figure 1: Structural optimization problem [5].
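The constraint-driven formulation above can be made concrete with a minimal sizing sketch. The numbers, the rectangular cantilever geometry and the single stress constraint are all assumptions chosen for illustration; the point is only that the minimum-mass design sits on the constraint boundary.

```python
import math

# Illustrative constraint-driven sizing: minimize the mass of a steel
# cantilever of rectangular cross-section (fixed width b, variable height h)
# carrying a tip load F, subject to a bending-stress limit sigma_allow.
# All numerical values are assumptions made for the sake of the example.

F = 10_000.0         # tip load [N]
L = 1.0              # beam length [m]
b = 0.05             # fixed section width [m]
sigma_allow = 200e6  # allowable bending stress [Pa]
rho = 7850.0         # steel density [kg/m^3]

# Maximum bending stress at the root: sigma = 6*F*L / (b*h^2).
# Mass = rho*b*h*L grows with h, so the optimum lies on the constraint
# boundary sigma = sigma_allow, giving the minimal admissible height:
h_min = math.sqrt(6.0 * F * L / (b * sigma_allow))
mass_min = rho * b * h_min * L

print(f"h_min = {h_min * 1000:.1f} mm, mass = {mass_min:.2f} kg")
```

Any height below h_min violates the stress constraint, and any height above it carries unnecessary mass: the active constraint defines the optimum.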
The optimization process consists in finding the best possible way to
minimize or maximize the objective function inside the design domain, so
that the applied load is transmitted to the supports in the most efficient
and safe way while keeping all the defined constraints in check. The
following task formulations are typical:
- Minimize the weight of a load-carrying structure so that the allowed
stresses and a given deflection are not exceeded;
- Maximize the first natural frequency so that the weight is identical to
the initial design and higher natural frequencies are not reduced.
Carrying out a structural optimization process correctly usually requires a
trusted model of the mechanical behaviour, called the analysis model, which
is a key component of the process. The analysis model can be based on
different methods: for simple problem definitions, analytic approaches are
enough; more commonly, numerical methods such as the finite element method
are employed, and sometimes the model can also be built from available data
points.
In the most general form, the aim of structural optimization is to improve
a component's behaviour with respect to all the given requirements;
defining these requirements is the first step in a structural optimization
process.
The questions at this first step are:
- the definition of the optimization objectives;
- the definition of the design variables, also called influence parameters,
which allow the process to meet the set objectives.
The second step is the detailed definition of the design variables; in this
case the questions are:
- which dimensions in the analysis model can be changed;
- what influence each dimension is expected to have on the component's
behaviour.
The core of the optimization procedure consists in coupling the analysis
model with an optimization algorithm able to modify the design variables so
that the component's behaviour and performance improve. Once all design
variables have been defined from the initial design, the design is analysed
and evaluated; the optimization algorithm then improves the component in an
iterative loop that is repeated until the optimum design is reached.
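The analyse-evaluate-update loop just described can be sketched in a few lines. Here a simple fully-stressed-design resizing rule stands in for the optimization algorithm, and a trivial stress formula stands in for the analysis model; the member forces, the allowable stress and the initial areas are invented for illustration, and a real workflow would replace the "analysis" step with a finite element solve.

```python
# Schematic of the iterative optimization loop: analyse the current design,
# evaluate it, update the design variables, and repeat until convergence.
# All numerical values are assumptions made for this sketch.

forces = [12_000.0, 8_000.0, 3_000.0]  # axial force per member [N] (assumed)
sigma_allow = 150e6                    # allowable stress [Pa] (assumed)
areas = [1e-4] * len(forces)           # initial cross-sections [m^2]

for iteration in range(50):
    # Analysis model: member stress for the current design variables.
    stresses = [f / a for f, a in zip(forces, areas)]
    # Update rule: scale each area so the member becomes fully stressed.
    new_areas = [a * s / sigma_allow for a, s in zip(areas, stresses)]
    # Convergence check: stop once the design no longer changes.
    if all(abs(n - a) / a < 1e-6 for n, a in zip(new_areas, areas)):
        break
    areas = new_areas

print([f"{a * 1e6:.1f} mm^2" for a in areas])
```

For this linear stand-in model the rule converges almost immediately (each area settles at force divided by allowable stress); with a real analysis model, where member forces redistribute as sections change, the loop genuinely has to be "gone through again and again" as the text says.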
Topology optimization of continuum structures is by far the most
technically challenging and, at the same time, the most economically
rewarding. Rather than limiting the changes to the sizes of structural
components, topology optimization provides much more freedom and allows the
designer to create totally new and highly efficient conceptual designs for
continuum structures; it can be applied to large-scale structures such as
buildings and bridges, but also to the design of materials at the micro-
and nano-levels.
Most of the methods developed for topology optimization are based on finite
element analysis, where the design domain is discretized into a fine mesh
of elements: in such a setting, the aim is to find the topology of a
structure by determining, for every point in the design domain, whether
there should be material (solid element) or not (void element).
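The solid/void decision over a discretized domain can be illustrated with a hard-kill sketch in the spirit of evolutionary methods: elements that contribute least are progressively switched to void until a target volume is reached. The element count, target fraction and sensitivity values below are made up so that the removal logic itself can be shown; in a real method each sensitivity would come from a finite element analysis that is re-solved every pass.

```python
# Schematic solid/void decision on a discretized design domain.
# Sensitivities stand in for FE-derived element contributions to stiffness
# (higher = more structurally useful); all values here are assumptions.

n_elements = 20
target_fraction = 0.5   # keep 50% of the material (assumed)
removal_per_pass = 2    # elements switched to void per iteration (assumed)

# Stand-in for FE-derived sensitivities, peaked away from element 10.
sensitivities = [abs(10 - i) + 1.0 for i in range(n_elements)]
solid = [True] * n_elements

while sum(solid) / n_elements > target_fraction:
    # Rank the remaining solid elements by sensitivity, lowest first.
    candidates = sorted(
        (i for i in range(n_elements) if solid[i]),
        key=lambda i: sensitivities[i],
    )
    # Hard-kill step: the least useful elements become void.
    for i in candidates[:removal_per_pass]:
        solid[i] = False
    # (A real method would re-run the FE analysis here and recompute
    # the sensitivities before the next pass.)

print(f"{sum(solid)} solid elements remain")
```

Because the stand-in sensitivities are fixed, the low-contribution elements around index 10 are removed first; with re-analysis each pass, the removal pattern would adapt as load paths change.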
To conclude this introductory chapter, a simplified scheme of the
structural topology optimization process is reported below in Fig. 2.
Figure 2: Scheme of the topology optimization process [4].
Generally, structural optimization tasks are classified by the kind of
design variables, because the solution strategies applied afterwards are
selected accordingly. There are three main design methods, called size
optimization, shape optimization and topology optimization; they are
represented in Figure 3.
- Size Optimization – the design variables are the parameters that dictate
the size of the structure: this often consists in computing the optimal
cross-sectional area of each strut in a truss structure, or the optimal
wall thickness.
- Shape Optimization – the design variables describe the shape of the
component boundary: the optimal form or shape that defines the boundary
curves and surfaces of the body is computed, while the introduction of new
structural elements such as cavities and braces is excluded.
- Topology Optimization – this process aims to determine the areas of a
component where material has to be added to improve its overall
performance, and the areas where material can be removed to reduce its
weight without compromising its structural performance.
Size and shape optimizations allow the material distribution in the structure to
satisfy certain loading conditions without modifying the topology of the component;
on the other hand, initial and optimized structures are completely different after a
process of topology optimization.
The optimization task has to be defined exactly through a specification list that
contains the available possibilities for changing the structure (design variables),
the requirements for the component (objective and restriction functions) and the
load cases to be considered. The specification list should contain all requirements,
constraint functions and load cases; otherwise the solver will define an optimum
design that is not able to completely fulfil the functions for which the component
is destined.
Figure 3: Classification of structural optimization tasks for a bridge [4].
Each analysis model refers to a single design condition: to adjust to variations in
the design domain, the analysis model must be able to update automatically; this
variation is decided on the input side with the design variables. On the output side,
the numerical values with which the objective and constraint functions have to be
evaluated are then selected. Input and output of the analysis model occur through
variable parameters; if this configuration cannot be used directly, the analysis
model must be parameterized, i.e. constructed in such a way that it can be modified
by modifying its parameters. In this way a universally working algorithm can be
applied to different tasks.
All requirements that a component must fulfil have to be considered in the
optimization process; requirements have to be parameterized in order to compare
them to the parametric structural responses. As an example, for the requirement
"the maximum stress in the component must not exceed 100 N/mm²", all stresses
in the component have to be evaluated and the highest stress value has to be
compared to the defined stress limit. In this example, parameterization of the
requirement appears very simple; however, it becomes difficult with requirements
like appearance, haptic or acoustic properties, which must be described by physical
quantities and sometimes require specific field studies.
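The stress requirement quoted above parameterizes naturally: evaluate the stress in every element and compare the highest value to the limit. The element stresses below are made-up sample values.

```python
# Parameterizing the requirement "maximum stress <= 100 N/mm^2" as a single
# comparable number: the design is feasible if g <= 0.

STRESS_LIMIT = 100.0  # N/mm^2, the limit named in the text

def stress_constraint(element_stresses, limit=STRESS_LIMIT):
    """Constraint value g = max stress - limit."""
    return max(element_stresses) - limit

sample = [42.0, 97.5, 61.3, 88.1]   # N/mm^2, made-up element stresses
g = stress_constraint(sample)
print(g <= 0)   # True: the highest sample stress (97.5) stays below the limit
```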
In the simplest case, a goal or a constraint function corresponds directly to a
certain parameter in the output file of the simulation program, although this
condition is rather rare; generally the output has to be processed with additional
routines. Whenever a requirement for the structure can be described as the value
of a domain integral, this function value should be used in the optimization process
for that requirement; this has a beneficial effect on the optimization process.
Particular importance attaches to these domain integrals: for example, in many
sectors mass has a very important meaning; with constant density, mass correlates
with the simplest domain integral, the volume of the component.
1.1. Structural Optimization in the Design Process
One thing that is in common to all engineering sectors is the fact that designers are
constantly under pressure to create better products in less time and at a lower
price: this is why optimization plays such an important role in product design.
The "typical" design cycle (Fig. 4) almost always originates with a drawing (a
sketch to illustrate a concept) and almost always ends with a drawing (the
manufacturing drawing).
Figure 4: Typical design cycle [4].
The biggest problem consists in translating the sketch into an acceptable and
manufacturable design. Common trade-offs of the typical design cycle are
appearance vs. function, cost vs. ease of manufacture, etc.; each trade-off
affects the design in a different way.
In the conventional design process, the designer has to rely on experience or
insight to come up with acceptable proposals. An analysis tool is then used to
evaluate each proposal, so the designer can use these results to choose the "best
proposal" among the available ones.
Figure 5: Typical design cycle vs. optimization driven design cycle [4].
In this process, design optimization becomes part of Computer Aided Engineering
(CAE) and allows the new design to be conceived and analysed at the same time: the
designer outlines the constraints and leaves it to the optimization tool to produce
compliant proposals; the optimization tool itself uses the analysis tool to decide
how to modify the initial design to produce a better one (Fig. 5).
Often the shapes and sizes proposed by the optimization tool are the ones most
likely to pass the subsequent analyst's verification.
This process is called Optimization Driven Design; the designer should always look
for an optimum design.
At this point it is fundamental to understand what an "optimum design" is and how
to recognise it. A good starting point is the dictionary definition of the word
"optimum": the greatest degree or best result obtained or obtainable under specific
conditions, where "specific conditions" refers to the allowed design freedom and
differs for each application.
The designer defines the conditions used to evaluate all the design alternatives: in
engineering terms this means drawing up mathematical equations that quantify the
performance of a certain design. The quantitative parameter used to evaluate a
design result is called the "objective"; unfortunately, in many cases several
objectives are required and these are often contradictory, making it difficult for
the designer to define the best compromise.
A working design almost always involves a compromise of some sort, especially
because very few designers have the luxury of "infinite" resources to pursue their
objectives.
These limits then give rise to the concept of constrained optimization and a design
that satisfies the constraints is called a feasible solution.
It is important to note that not all designs are done from scratch; the optimization
philosophy can also be applied to existing designs in order to improve them to the
best possible extent. In this case things are a little harder, since the flexibility
to modify things is often much lower. A further requirement is that a component
often has to fit within an assembly of other components: this implies working with
a package space within which the component needs to fit and with assembly points
that cannot be varied. From a mathematical point of view, the package space is
considered as a design space or optimization domain. Finally, it is often not
possible to change every possible parameter: the parameters that can be varied are
called design variables.
The dependence of the objective on the design variables is expressed as an equation
called the objective function. The statement of a design optimization problem
consists of:
• Package space,
• Design variables,
• Constraints,
• Objectives.
All of the above-mentioned requirements must be satisfied for the new design
proposals to be of any use.
2. Shape Optimization
Shape optimization has been implemented in several commercial finite element
programs to meet the industrial need to lower cost and improve performance. In this
chapter the geometric boundary method of optimization is presented: the geometric
boundary method defines design variables as Computer Aided Design (CAD) based
curves; surfaces and solids are created and meshes are generated within finite
element analysis, whereas shape optimization is performed outside the finite
element program.
Shape optimization can minimize mass by changing or determining the boundary
shape while satisfying all design requirements; it arises from the need to achieve
the best results with limited resources. The simplest shape optimization problem is
the isoperimetric problem, which regards the determination of the shape of a curve
of given length enclosing the maximum area on a plane. For a closed curve the
answer is a circle; when part of the boundary is a fixed straight line, the maximum
area is enclosed by a semicircle, as shown in Figure 6 below.
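A quick numerical check of the isoperimetric statement: among simple closed shapes of equal perimeter L, the circle encloses the largest area.

```python
# Compare the enclosed area of a circle, a square and an equilateral triangle
# sharing the same perimeter L.

import math

L = 1.0  # fixed perimeter

areas = {
    "circle":   L**2 / (4 * math.pi),             # r = L / (2 pi)
    "square":   (L / 4) ** 2,
    "triangle": math.sqrt(3) / 4 * (L / 3) ** 2,  # equilateral
}
print(max(areas, key=areas.get))   # circle
```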
Figure 6: Isoperimetric problem [1].
In the engineering field, Galileo was the first to define a shape optimization
problem, in his 1638 book "Dialogues Concerning Two New Sciences", where he
presented a logical definition and solution for the shape of a cantilever beam of
uniform strength (Fig. 7).
Figure 7: Galileoβs shape optimization problem and solution for a cantilever beam [1].
Shape optimization has been a topic of in-depth research for the last three decades
and structural optimization methodologies have been broadly implemented in
commercial Finite Element (FE) software: it is nowadays possible to treat large
shape changes without mesh distortion during the shape design process. The mesh
distortion problem regards the areas of the domain where the mesh elements cannot
keep the pre-imposed shape and size (for example rectangular or triangular) and
where, consequently, the mesh is not regular; this problem heavily affects the
results and the computation time of the process. A complete presentation of the
shape optimization problem and its solution schemes, together with some limits of
the method, is given here.
The structural problem can be governed by the principle of virtual work for a
deformable continuum body in static equilibrium under the action of the body
forces f_i and surface tractions t_i^0, as follows:
∫_V σ_ij δu_{i,j} dV = ∫_V f_i δu_i dV + ∫_{Γ_t} t_i^0 δu_i dΓ   (2.1)

with u_i = u_i^0 on Γ_u and t_i = t_i^0 on Γ_t
where δu_i is an admissible virtual displacement; V denotes the domain during the
analysis phase, while Γ_u and Γ_t are the displacement and traction boundaries
respectively; Fig. 8 represents the described domain.
The shape optimization problem can be defined as follows: find the boundary Γ of
the domain V that minimizes a cost function m(V, u_i) subject to:

g_j(V, u_i) ≤ g_j^0   and   h_k(V, u_i) = h_k^0   (2.2)

where u_i satisfies the governing equations; g_j(V, u_i) and h_k(V, u_i) denote the
inequality and equality constraints respectively. Each constraint describes a
design requirement.
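A toy instance of problem (2.2): one boundary parameter (the radius r of a circular hole cut from a unit plate) minimizing a cost subject to an inequality constraint, solved by a crude search of the feasible region. Both functions are invented stand-ins for the domain integrals of the text.

```python
# Minimize the remaining mass m(r) over a grid of hole radii, keeping only
# radii that satisfy the inequality constraint g(r) <= g0.

import math

def mass(r):                # cost m(V): material left after cutting the hole
    return 1.0 - math.pi * r**2

def g(r):                   # inequality constraint: a made-up stress proxy
    return 1.0 / (1.0 - r)  # grows as the hole approaches the outer boundary

g0 = 1.8
candidates = [i / 1000 for i in range(1, 500)]   # r in (0, 0.5)
feasible = [r for r in candidates if g(r) <= g0]
best = min(feasible, key=mass)   # least mass = largest feasible radius
print(round(best, 3))
```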
Figure 8: Deformable body with applied external loads [1].
It is important to note that shape optimization problems may have multiple
solutions and a unique solution is not guaranteed, mostly because design problems
are often ill-posed; moreover, the final design domain cannot be known a priori.
The goal in this case is not to produce an absolute optimum design but to improve
the design within a neighbourhood of small changes.
Shape optimization based on Finite Element Analysis (from now on FEA) has received
continuously growing interest in practical design, since FEA can replace physical
experiments in many engineering fields. On the other hand, it is almost impossible
to provide continuous shape changes during shape optimization without distorting
the mesh elements of the FEA model. The mathematical representation of the
geometric boundary, mesh generation and manipulation extensively affect the result
of the optimization process; currently, design boundaries can be parameterized
using the parametric language of the FE solver.
Techniques for the representation of the geometric boundary and the formulation of
the geometric boundary method are discussed below.
2.1. Shape Optimization Methods
Element Nodal Coordinate Method – an early method for shape optimization using
finite element nodal coordinates as design variables; it is commonly affected by
deterioration of mesh quality caused by the relocation of nodal boundary points,
which easily leads to unacceptable results; an example of an unacceptable result is
shown in Fig. 9.
Figure 9: Shape optimization problem of a square plate [1].
To limit mesh distortion, additional constraints must be added to control the
movement of each nodal coordinate; this is a process of trial and error. The
general configuration for implementing this method requires integration with a CAD
system to define a suitable design boundary and with a good mesh generator to
update the finite element model while changing the design variables.
The boundary shape can also be obtained as a linear combination of several basis
shapes represented by boundary elements or fictitious loads (to control the
movement of nodal points). In order to characterize shape changes with a reasonable
(finite) number of design variables, the reduced-basis method uses a few design
vectors to exhaustively describe shape changes in the finite element analysis.
Geometric Boundary Method – the geometric boundary can be defined by CAD-based
curves, an approach referred to as the geometric boundary method. For shell-type
structures, appropriate curves are predefined, a surface is generated from those
predefined curves and an automatic mesh generator creates the mesh for the
surface. Once the design is changed, the CAD-based curves are changed as well;
surface modification and new mesh generation then follow sequentially during the
shape optimization procedure.
For each shape optimization study, it is possible to define one or more design
variables for each axis; more design variables on a single axis allow a better
shape to be obtained. For the simplest case of one design variable per axis, the
shape optimization problem can be stated as follows:

Minimize σ_max(w_i)   (2.3)
Subject to m(w_i) ≤ m_0   (2.4)
The solution of the shape optimization problem using the finite element method for
the analysis procedure has to handle the shape variation introduced after each
optimization iteration; these changes often require the construction of a new
discrete model of the structure after each optimization step. The mesh model
should be updated automatically and directly from the design variables used to
parameterize the shape.
The complication in the boundary method lies almost entirely in the analysis and
design sensitivity, whereas the optimization process itself is simplified by the
small number of design variables typically used in this kind of problem.
For highly time-consuming simulations, approximate models can replace
high-fidelity simulation models to efficiently predict performance during the
shape optimization process; such approximate models are employed to predict the
performance of the actual component.
Shape optimization can be employed as a daily computer-aided design tool because
the manual effort needed to integrate CAD systems, finite element programs and
optimizers at high fidelity levels has been considerably reduced over recent
years. Design variables for shape optimization problems have been implemented in
commercial finite element codes using the geometric boundary method: curve,
surface and mesh generation are performed in the finite element software using its
parametric language.
Often shape optimization software is integrated with topology optimization
software in order to convert the optimum topology into an initial shape for the
shape optimization process.
describe the geometry of holes at the micro-level; it is on these variables that
the optimization process should be applied.
Any material consisting of a given elastic base material with microscopic
inclusions of void will, at intermediate values of base material density, provide
the structure with strictly less than proportional rigidity.
In an optimal structure there should be density values of 0 or 1 in most elements;
this depends directly on the choice of the microstructure, since the use of an
optimal microstructure results in a very efficient use of intermediate material
densities. A body with an optimal distribution of material is considered as formed
out of cells of infinitely small dimension and infinitely big number. It is
however possible to regularize the minimum compliance problem formulated as a 0-1
problem by restricting the possible range of material sets to measurable sets of
bounded perimeter, for example by constraining the total length of the boundaries
of the structure.
The imposition of such a constraint does not change the discrete-valued nature of
the problem, and the perimeter constraint is particularly valuable because it
prevents the formation of microstructures with rapid variation of material density
or material thickness.
The material distribution approach to topology design of continuum structures
allows the structure to be described by the density of material. From now on, the
porous material with microstructure is constructed from a basic unit cell
consisting, at the microscopic level, of material and void; the body is then
composed of infinitely many such cells, of infinitely small dimension, repeated
periodically through the medium. It is also possible to have a continuously
varying density of material through the medium, as required by topology
optimization problems.
The resulting medium is exhaustively described by effective macroscopic material
properties that depend on the geometry of the basic cell; these properties are
computed through the homogenization theory formulation. The computation of these
effective properties plays a key role in topology optimization; the homogenization
formulation is thus presented here for a two-dimensional case.
Suppose that a periodic microstructure, i.e. a structure in which the basic unit
cell is repeated throughout the whole volume of the domain, is assumed in the
neighbourhood of an arbitrary point x of a given linearly elastic structure.
Periodicity is represented by the parameter δ (of very small value) and the
elasticity tensor E^δ_ijkl has the form:

E^δ_ijkl(x) = E_ijkl(x, x/δ)   (3.18)

where the mapping (x, y) ↦ E_ijkl(x, y) is Y-periodic in the microscopic variable
y; x represents the macroscopic variation of the material parameters and x/δ the
microscopic periodic variation.
Suppose now that the structure is subjected to a macroscopic body force and a
macroscopic surface traction; these external loads cause the displacement u^δ(x):

u^δ(x) = u^0(x) + δ u^1(x, x/δ) + ⋯   (3.21)

where u^0 is the macroscopic deformation field, independent of the microscopic
variable y.
The effective displacement field is the macroscopic deformation field that arises
due to the applied loads when the rigidity of the structure is assumed to be given
by the effective rigidity tensor:

E^H_ijkl(x) = (1/|Y|) ∫_Y [E_ijkl(x, y) − E_ijpq(x, y) ∂χ^kl_p/∂y_q] dy   (3.22)

with χ^kl the microscopic displacement fields given as the Y-periodic solutions of
the cell problem.
The variational form of the previous equation is:

E^H_ijkl(x) = (1/|Y|) a_Y(y^ij − χ^ij, y^kl − χ^kl)   (3.23)

a_Y(y^ij − χ^ij, φ) = 0   for all φ ∈ U_Y   (3.24)

where U_Y denotes the set of all Y-periodic virtual displacement fields.
The effective elastic moduli for plane problems can be computed by solving three
different analysis problems for the unit cell Y; for most geometries this has to
be carried out using finite element methods.
To use the homogenization method in an actual design process, it should be
implemented in an easy-to-use pre-processor and should hold for mixtures of
linearly elastic materials and for materials with voids.
Consider now a layered material, with layers directed along the y2-direction and
repeated periodically along the y1-axis: the resulting unit cell is [0, 1] × ℝ and
the unit fields χ^ij are independent of the variable y2.
Using periodicity and appropriate test functions, and assuming that the direction
of layering coincides with the direction of orthotropy of the material, the only
non-zero elements of the tensor E_ijkl are listed below:

E1111, E2222, E1212 (= E1221 = E2121 = E2112), E1122 (= E2211).   (3.25)
For a layering of two isotropic materials, with the same Poisson ratio ν but
different density, elasticity moduli E⁺ and E⁻ and layer thicknesses γ and (1 − γ)
respectively, the layering formulas are:

E^H_1111 = I1,
E^H_2222 = I2 + ν² I1,
E^H_1212 = ((1 − ν)/2) I1,   (3.26)
E^H_1122 = ν I1,

I1 = (1/(1 − ν²)) · E⁺E⁻ / (γE⁻ + (1 − γ)E⁺),   (3.27)

I2 = γE⁺ + (1 − γ)E⁻   (3.28)
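The layering formulas (3.26)-(3.28) can be evaluated directly; as a sanity check, setting E⁺ = E⁻ must reproduce the plane-stress moduli of a single isotropic material, e.g. E1111 = E/(1 − ν²) and E1212 = E/(2(1 + ν)). The material values below are assumptions for illustration.

```python
# Effective moduli of a rank-1 layering of two isotropic materials with
# thickness fractions gamma and 1 - gamma, following (3.26)-(3.28).

def layered_moduli(E_plus, E_minus, gamma, nu):
    """Homogenized plane moduli of the layered composite."""
    I1 = (E_plus * E_minus) / ((1 - nu**2) * (gamma * E_minus + (1 - gamma) * E_plus))
    I2 = gamma * E_plus + (1 - gamma) * E_minus
    return {
        "E1111": I1,
        "E2222": I2 + nu**2 * I1,
        "E1212": (1 - nu) / 2 * I1,
        "E1122": nu * I1,
    }

E, nu = 210e3, 0.3   # N/mm^2, a typical steel modulus; assumed values
same = layered_moduli(E, E, 0.5, nu)
print(abs(same["E1111"] - E / (1 - nu**2)) < 1e-5)   # True
```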
The above-mentioned set of equations allows the effective material properties of
the resulting material to be defined; when the elasticity moduli are computed as
the material is constructed, the resulting material properties are:
E1111 = γE / (μγ(1 − ν²) + (1 − μ)),
E1122 = μν E1111,
E2222 = μE + μ²ν² E1111,   (3.29)
E1212 = 0.
From the results listed above, a layered material consisting of material and void
does not possess any shear stiffness (E1212 = 0).
For a computational topology design scheme based on equilibrium analyses with
these materials, voids should be represented by a very weak material (with a very
low, but non-zero, stiffness) in order to avoid a singular stiffness matrix. On
the other hand, layered materials have analytical expressions for the effective
elasticity moduli, which is a distinct advantage for optimization.
It is important to underline that the use of homogenized material coefficients is
consistent with the basic properties of the minimum compliance problem. Consider a
minimizing sequence of designs in the set of 0-1 designs and assume this sequence
to be composed of a sequence of microcells given by a scaling factor δ > 0. In the
limit δ → 0, the design sequence has a response governed by the homogenized
parameters. It is a fundamental property of the homogenization process that the
displacement u^δ(x) converges weakly to the displacement u^0(x) of the
homogenized design. As the compliance is a weakly continuous functional of the
displacements, this implies the convergence of the compliance values.
3.2. Conditions of Optimality
In this chapter the necessary conditions of optimality are derived for the minimum
compliance design problem that employs composite materials in the parameterization
of the design. There are two different types of design variables: the composite
material is an anisotropic (orthotropic) material for which the angle of rotation
of the unit cell is a fundamental design variable; the other variable regards the
size of the unit cell and thus the material density across the volume.
The formulation of the material distribution method for optimal continuum
structures involves working with a composite material consisting of a base
material and periodically repeated micro voids. Composite materials with cell
symmetry are orthotropic, and the angle of rotation of the material axes
influences the effective compliance of the structure: it is possible to compute
the optimal rotation of the cell analytically.
The optimality conditions for material rotations in plane stress/strain problems
are derived below.
Assume an orthotropic material with the properties described before; in the frame
of reference given by the material axes of the chosen material there is the
following stress/strain relation:

σ_ij = E_ijkl ε_kl   (3.30)

with E1111, E2222, E1122, E1212 being the only non-zero components of the rigidity
tensor E_ijkl; E1111 is assumed greater than or equal to E2222, and a set ε_ij^k
(with k = 1, 2, …, M) of strain fields is given for a number of load cases.
With minimum compliance design in mind, the problem is to maximize the weighted
sum of a number of strain energy densities:
W = Σ_{k=1}^{M} w^k [ ½ E1111 (ε11^k)² + ½ E2222 (ε22^k)² + E1122 ε11^k ε22^k + 2 E1212 (ε12^k)² ]   (3.31)
If the strains are expressed in terms of the principal strains ε_I^k and ε_II^k,
with the first greater than the second:

ε11^k = ½ [(ε_I^k + ε_II^k) + (ε_I^k − ε_II^k) cos 2ψ^k]
ε22^k = ½ [(ε_I^k + ε_II^k) − (ε_I^k − ε_II^k) cos 2ψ^k]   (3.32)
ε12^k = −½ (ε_I^k − ε_II^k) sin 2ψ^k
Here ψ^k is the angle of rotation of the material frame relative to the frame of
the k-th principal strains. The aim of this analysis is to determine the angle Θ
of rotation of the material, relative to a chosen frame of reference, that
maximizes the function W. Each angle ψ^k is thus written as Θ = ψ^k − α^k, where
α^k is the angle of rotation of the k-th strain field.
Once the new expression for the strains is inserted in the equation for W and the
latter is differentiated, the stationarity condition is found:

Σ_{k=1}^{M} w^k [A^k sin 2(Θ − α^k) + B^k sin 2(Θ − α^k) cos 2(Θ − α^k)] = 0   (3.33)

A^k = ((ε_I^k)² − (ε_II^k)²)(E1111 − E2222)   (3.34)

B^k = (ε_I^k − ε_II^k)² (E1111 + E2222 − 2E1122 − 4E1212)   (3.35)
The stationarity condition is then achieved if the following 4th-order polynomial
in sin 2Θ is zero:

P(sin 2Θ) = p4 sin⁴ 2Θ + p3 sin³ 2Θ + p2 sin² 2Θ + p1 sin 2Θ + p0   (3.36)

W is periodic, so there exist at least two real roots of P; since the last
equation is of 4th order, it can be solved analytically. The actual minimizer of
the compliance is finally found by evaluating W for the four or eight stationary
rotations.
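The stationary rotations can also be checked by brute force: evaluate the single-load strain energy density W of equation (3.31), with the strains rotated as in (3.32), on a grid of angles and locate the maximizer. The moduli and principal strains below are invented sample values with E1111 > E2222 and a positive shear parameter, for which the optimum is expected at ψ = 0 (material axis aligned with the largest principal strain).

```python
# Grid search for the rotation angle maximizing the single-load strain energy
# density W(psi) of an orthotropic material.

import math

E1111, E2222, E1122, E1212 = 10.0, 2.0, 1.0, 0.5
eI, eII = 1.0, 0.2   # principal strains, eI > eII

def W(psi):
    c, s = math.cos(2 * psi), math.sin(2 * psi)
    e11 = 0.5 * ((eI + eII) + (eI - eII) * c)
    e22 = 0.5 * ((eI + eII) - (eI - eII) * c)
    e12 = -0.5 * (eI - eII) * s
    return (0.5 * E1111 * e11**2 + 0.5 * E2222 * e22**2
            + E1122 * e11 * e22 + 2 * E1212 * e12**2)

angles = [i * math.pi / 1800 for i in range(-900, 901)]   # -90 to +90 degrees
best = max(angles, key=W)
print(round(math.degrees(best), 1))   # 0.0
```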
For the single load case, the stationary angle ψ can be expressed as:

sin 2ψ = 0   or   cos 2ψ = −γ,

γ = (α/β) · (ε_I + ε_II)/(ε_I − ε_II)   (3.37)

α = (E1111 − E2222) ≥ 0   and   β = (E1111 + E2222 − 2E1122 − 4E1212)

With the values defined above it is possible to maximize W (this depends on the
sign of the parameter β, which is a measure of the shear stiffness of the
material). For low shear stiffness values (β ≥ 0) the globally minimal compliance
is achieved with ψ = 0; in this case the largest principal strain is aligned with
the strongest material
method. The values of s^k and γ^k depend on the present value of the Lagrange
multiplier Λ; the multiplier should be adjusted in an inner iteration loop in
order to satisfy the active volume constraint: the volume of the updated density
values is a continuous and decreasing function of the multiplier. The volume is
strictly decreasing in the intervals of interest, where the bounds on the density
are not active at all points; in this way it is possible to determine a unique
value of the multiplier.
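The inner loop on the multiplier can be implemented by bisection, precisely because the updated volume is a continuous, decreasing function of the multiplier. The update rule below (density scaled by the square root of a sensitivity-to-multiplier ratio, with move limits and box bounds) is a generic optimality-criteria-style sketch with assumed positive sensitivities, not the exact scheme of the text.

```python
# Bisection on the Lagrange multiplier so that the updated densities satisfy
# the volume constraint.

def update_densities(rho, sens, lam, move=0.2, rho_min=0.001):
    """One multiplier-dependent density update with move limits and bounds."""
    new = []
    for r, s in zip(rho, sens):
        candidate = r * (s / lam) ** 0.5
        candidate = max(r - move, min(r + move, candidate))  # move limit
        new.append(max(rho_min, min(1.0, candidate)))        # box bounds
    return new

def inner_loop(rho, sens, volume_target, tol=1e-6):
    """Bisect the multiplier until the updated volume meets the constraint."""
    lo, hi = 1e-9, 1e9
    while hi - lo > tol * (1 + hi):
        lam = 0.5 * (lo + hi)
        if sum(update_densities(rho, sens, lam)) > volume_target:
            lo = lam   # volume still too high: increase the multiplier
        else:
            hi = lam
    return update_densities(rho, sens, 0.5 * (lo + hi))

rho = [0.5] * 10
sens = [float(i + 1) for i in range(10)]   # made-up positive sensitivities
new_rho = inner_loop(rho, sens, volume_target=4.0)
print(abs(sum(new_rho) - 4.0) < 1e-2)   # True
```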
The values of the parameters η and ζ are chosen conveniently to achieve rapid and
stable convergence of the iteration scheme; typical values of η and ζ are 0.8 and
0.5 respectively.
If the density is given through a number of other design variables describing the
micro-geometry of the voids, it is necessary to define update schemes for those
variables too. The angle of rotation of the material with voids should also be
updated, using the axes of principal strains or of principal stresses as axes of
orthotropy.
3.3. Computational Procedure
Homogenization modelling is based on the numerical calculation of the globally
optimal distribution of the design variables that define the microstructure being
used; this in turn determines the density distribution of material, which is the
primary target of the process.
In the following, the main steps to define the optimal topology of a structure are
described, starting from an initial layout and arriving at a final optimal
solution.
Step 1 – Pre-processing of geometry, loading and material properties.
The analysis starts by choosing a suitable reference domain (ground structure)
that allows surface tractions, fixed boundaries, etc. to be defined. It is now
possible to define those areas of the ground structure that have to be left
untouched, as solid material or as voids, and the rest, which represents the
design area and can be modified during the optimization process.
Once this setting procedure is completed, it is possible to construct a FEM mesh
for the ground structure; the mesh should be fine enough to describe all areas of
the structure accurately, and it remains unchanged during the whole design
process. Finite element spaces are then constructed for the independent fields of
displacements and design variables.
The nature of the problem means that the finite element models involved in the
material distribution method easily become large-scale, especially in 3D; since
the process works on a fixed grid, no re-meshing of the design is necessary. The
FEM analysis can be further optimized if rectangular (box-like) domains are used
and discretized with the same element throughout: in this case only one element
stiffness matrix needs to be calculated.
For large-scale computations, iterative solvers and parallel implementations can
sensibly reduce computation time; normally, solving the equilibrium equations is
the most time-consuming part of topology optimization problems.
Topology design problems require working with a huge number of design variables,
even though it is often possible to reduce the number of constraints in the
problem statement. The application of an adjoint method for the sensitivity
computation is often required, so it is presented briefly here.
For the functional u_out = pᵀu, the equilibrium equation is satisfied by u, so
that Ku − f = 0. For any vector λ the relation becomes:

u_out = pᵀu − λᵀ(Ku − f)   (3.45)

Differentiating the previous relation with respect to a design variable ρ_e, this
can be written as:

∂u_out/∂ρ_e = (pᵀ − λᵀK) ∂u/∂ρ_e − λᵀ (∂K/∂ρ_e) u   (3.46)

If the adjoint variable now satisfies the adjoint equation pᵀ − λᵀK = 0, then the
simple expression for the derivative of the output displacement is:

∂u_out/∂ρ_e = −λᵀ (∂K/∂ρ_e) u   (3.47)
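The adjoint result can be verified on a 2-DOF system: the derivative −λᵀ(∂K/∂ρ)u, obtained with one extra linear solve, matches a finite difference of the output displacement. The stiffness matrix K(ρ) below is an arbitrary symmetric positive definite example, not taken from the text.

```python
# Adjoint sensitivity of an output displacement u_out = p^T u for K(rho) u = f,
# checked against a central finite difference.

def solve2(K, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(b[0] * K[1][1] - b[1] * K[0][1]) / det,
            (K[0][0] * b[1] - K[1][0] * b[0]) / det]

def K_of(rho):
    return [[2 * rho, -rho], [-rho, rho + 1]]

dK = [[2, -1], [-1, 1]]        # dK/drho, constant for this K(rho)
f, p = [1.0, 0.0], [0.0, 1.0]  # load vector and output-selection vector
rho = 1.5

u = solve2(K_of(rho), f)       # equilibrium:  K u = f
lam = solve2(K_of(rho), p)     # adjoint:      K^T lam = p  (K is symmetric)
adjoint = -sum(lam[i] * dK[i][j] * u[j] for i in range(2) for j in range(2))

h = 1e-6                       # central finite-difference check on u_out = u[1]
fd = (solve2(K_of(rho + h), f)[1] - solve2(K_of(rho - h), f)[1]) / (2 * h)
print(abs(adjoint - fd) < 1e-6)   # True
```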
It is important not to neglect the material characteristics. Choose a composite
constructed by periodic repetition of a unit cell consisting of the given material
with one or more holes, or a layered layout. Then compute the effective material
properties of the composite according to homogenization theory; this allows a
functional relationship to be obtained between the density of the composite
material and the effective material properties of the resulting orthotropic
material.
At this point it is possible to generate a database of material properties as a
function of the design variables, with a specific set of data for each value of
the Poisson ratio.
Step 2 – Optimization Process
At this point it is possible to compute the optimal distribution over the
reference domain of the design variables that describe the properties of the
composite material. The process uses a displacement-based analysis together with
the optimality update schemes for the density and for the optimal angle of cell
rotation. The structure of the algorithm is the following:
ο· Analysis of the current design together with objective evaluation, the starting
design is often characterized by a homogeneous distribution of material;
The iterative part of the algorithm comprehends:
ο· For the present design defined by density and angle of rotation of cell (as
variables), compute the rigidity tensor throughout the whole structure;
ο· For the defined distribution of rigidity, compute using FEM the resulting
displacements and associated stress and strains, for each load case;
ο· Compute the compliance of this design, if there is only a marginal improvement
over the previous design, stop the iteration process otherwise continue;
31. 31
• Update the angles of cell rotation based on the optimality criteria described
before, basing this calculation on the principal stresses;
• Update the density variables according to the criteria shown before, computing
the energy equations B and E from the principal strains; at this step it is also
possible to compute the effective value of the Lagrange multiplier for the
volume constraint;
• Repeat the iteration loop until the difference between two consecutive iterations
is lower than the chosen tolerance.
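The loop above can be sketched in Python. The sketch below is only a toy
illustration under stated assumptions: the finite element analysis and the
homogenized cell model are replaced by mock sensitivities `dC`, the cell-rotation
update is omitted, and the fixed-point update rule, move limit and bisection
bounds are generic choices rather than the exact scheme described here.

```python
import numpy as np

def oc_update(rho, dC, v_frac, move=0.2, rho_min=1e-3):
    """One optimality-criteria density update (toy sketch): fixed-point
    step scaled by the Lagrange multiplier of the volume constraint,
    found by bisection, with move limits and a lower density bound."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l2 + l1) > 1e-4:
        lam = 0.5 * (l1 + l2)
        rho_new = rho * np.sqrt(np.maximum(-dC, 0.0) / lam)
        rho_new = np.clip(rho_new,
                          np.maximum(rho - move, rho_min),
                          np.minimum(rho + move, 1.0))
        if rho_new.mean() > v_frac:   # too much material: raise multiplier
            l1 = lam
        else:
            l2 = lam
    return rho_new

# mock sensitivities standing in for a finite element analysis
rho = np.full(8, 0.5)
dC = -np.linspace(1.0, 2.0, 8)   # compliance drops when density grows
rho = oc_update(rho, dC, v_frac=0.5)
print(rho.mean())
```

Each element is updated independently; only the bisection on the multiplier
couples them, which mirrors the rescaling needed to satisfy the volume constraint.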
Normally the design process concerns a structure composed of fixed areas (such as
solids and voids), and the updating of the design variables should involve only the
areas of the ground structure that are set to be re-designed.
For the problem above it is thus necessary to decide at an early stage on a choice
of basic unit cells as the basis for computing the effective elasticity moduli; the
most important quantity in this analysis is the density of material, while the
underlying geometric quantities that define the density are less interesting.
Continuing the discussion with a hypothetical two-dimensional problem: if micro
voids are made of square holes in square unit cells, the density is described by
just one geometric variable (the length of the square sides) and can take all values
between 0 and 1; if voids are made of circular holes in square cells, a density of 0
is not admissible. On the other hand, using rectangular micro voids in square cells
gives a more complicated microstructure with twice as many geometric variables; at
the same time, experiments have shown that this choice results in a more stable
iteration history and in slightly better compliance values.
For three-dimensional problems, box-like holes in cubic cells are a simple choice of
microstructure.
The use of single-inclusion cells as outlined above is not justified from a
mathematical point of view, since these composites do not assure the existence of
solutions to the optimization problem. These composites should be seen only as a
simple type of composite, useful to remove the 0-1 nature of the generalized shape
problem.
It is possible to demonstrate that layered materials are able to generate the
strongest microstructure constructed from a given material; moreover, their rigidity
tensors are given analytically, thus simplifying the pre-processing step of defining
the effective material properties. For layered materials, however, it is necessary
to use different refinements of the unit cell; this depends on the spatial dimension
and on whether the problem is a single- or multiple-load problem.
From an engineering point of view, it is interesting to note that topology
optimization using square holes in square cells gives rise to very well defined
designs, consisting almost entirely of areas with material or void and very few
areas with intermediate density (i.e. composite material); the use of layered
materials, on the other hand, gives less well defined shapes with larger areas of
intermediate density, but these designs tend to be more efficient than designs that
use single-inclusion cells. The use of square or rectangular holes in square cells
at a single level of micro-geometry is considered sub-optimal.
However, the well-defined shapes obtained using square holes in square cells as well
as their simplicity tend to favour the use of this micro-geometry.
Optimal microstructures:
- Single load, 2D: rank-2 materials with orthogonal layers along the directions of
principal stresses/strains;
- Single load, 3D: rank-3 materials with orthogonal layers along the directions of
principal stresses/strains;
- Multiple loads, 2D: rank-3 materials with non-orthogonal layers;
- Multiple loads, 3D: rank-6 materials with non-orthogonal layers.
Table 1: possible optimal microstructures [2].
Notice also that the way the angle of cell rotation is updated in the optimality
criteria directly influences the resulting design: layered materials tend to give
sub-optimal designs if the rotation angle is updated in single steps instead of
aligning the material and stress/strain axes at each iteration step.
A final consideration about the optimality criteria method described above is that,
for the assembled stiffness matrix to be positive definite, the densities must be
bounded by the following inequalities:

0 < ρ_min ≤ ρ  and  0 < γ_min ≤ γ   (3.48)

ρ_min and γ_min are suitable lower bounds; they can never be equal to zero.
Algorithms such as the one described above have been used to great effect in a
large number of structural topology design studies and are established for solving
large-scale problems. The effectiveness of the algorithm comes from the fact that
each design variable is updated independently of the others, except for the
rescaling that has to take place to satisfy the volume constraint.
A major challenge for the computational implementation of topology design is coping
with the high number of design variables. Optimality criteria methods were applied
first; however, the use of mathematical programming algorithms typically implies
greater flexibility. The high number of design variables is combined with a
moderate number of constraints: an algorithm that is suitable for large-scale
topology optimization problems is the MMA algorithm (Method of Moving Asymptotes).
This method works with a sequence of approximate sub-problems that are constructed
from sensitivity information at the current iteration point as well as some
iteration history.
To conclude this section, Figure 10 shows the flowchart of the homogenization
method for topology optimization.
3.4. Analysis Refinement and Issues
After implementing a computational scheme for topology design along the lines of
standard sizing design problems, several additional issues have to be addressed: if
low-order elements are used for the analysis, the results will be affected by many
checkerboard patterns of black and white elements. At the same time, refining the
mesh can have dramatic effects on the results, since it allows adding finer and
finer details to the design.
Checkerboard patterns correspond to areas of a structure where the density of
material assigned to contiguous finite elements varies in a periodic fashion,
similar to a checkerboard of alternating void and solid elements. This effect is
un-physical and results from poor FEM modelling being exploited by the optimization
procedure; at the same time, the affected area seems to have the best performance:
for example, a checkerboard of material in a uniform grid of square elements has a
greater stiffness than any other possible material distribution.
The occurrence of checkerboard patterns can easily be prevented, and any method
that achieves mesh-independency also solves the problem once the mesh becomes fine
enough; geometry control measures also help avoid checkerboard formation.
A fixed-scale geometric restriction on the design can be a counter-productive
solution when using a numerical method to obtain an overview of the behaviour of
the optimal topology at a fairly fine scale, when designing low-volume-fraction
structures, or when composite materials are used as the basis for optimization. The
most general approach is to use a FEM discretization where checkerboards are not
present; this involves using higher-order elements for the displacements and thus a
higher computational cost.
Another serious problem associated with the 0-1 statement is that normally there is
no solution to the continuous problem, since it has been presented in a discretized
form: the drawback is that the problem becomes sensitive to the mesh element
dimension. The physical explanation for the mesh-dependent results is that, by
introducing finer and finer scales, the design space is expanded and the optimal
design is not a classical solution with finite-size features but a composite
material with varying density throughout the volume.
For production reasons, a design with fine-scale variation should be avoided, and a
design tool able to give a mesh-independent result is generally preferred.
Several different techniques have been proposed to limit the geometric variation of
the design field by imposing additional constraints on the problem, i.e. by
restricting the size of the gradient of the density distribution. This constraint
can be set either on the perimeter or on some L^p-norm of the gradient; in both
cases experimentation is needed to define a suitable constraint value.
An alternative consists in imposing a point-wise limitation on the gradient of the
density field; in this case the constraint has an immediate geometric meaning, for
example the thinnest possible feature in a design. Implementation may be
problematic, but it can be handled easily via a move-limit technique.
Another way to limit geometric variations of the design consists in applying
filters, in a similar way to image processing: it is then possible to work with
filtered densities in the stiffness matrix, so that the equilibrium constraint is
modified to the format:

K(H(ρ)) u = f   (3.49)

from the original format:

K(ρ) u = f   (3.50)
H denotes a filtering applied to the density ρ. The filter may be a linear filter
with a defined minimum radius r_min that gives the modified density H(ρ)_k in the
k-th element as:

H(ρ)_k = Σ_{j=1}^{N} Ĥ_j^k ρ_j   (3.51)

Ĥ_j^k = Â_j^k / Σ_{j=1}^{N} Â_j^k,  with  Â_j^k = r_min − dist(k, j),
{j ∈ N | dist(k, j) ≤ r_min},  j = 1, 2, …, N   (3.52)

Ĥ_j^k is the normalized weight factor and dist(k, j) denotes the distance between
the centre of element k and the centre of element j. The convolution weight Ĥ_j^k
is zero outside the filter area, and the weight for element j decays linearly with
distance from element k.
The filtering means that the stiffness in an element depends on the density ρ in
all elements in a neighbourhood of that element, causing a smoothing of the
density. The filter radius r_min is fixed in the formulation and implies the
enforcement of a fixed length-scale in the designs and convergence with mesh
refinement. Generally, filtering allows density fields that are bi-valued, while
the stiffness distribution is more "blurred" [2], with grey boundaries.
To implement the filter in the optimization procedure described before, it is
necessary to modify the sensitivity information, since the sensitivity of the
output displacement with respect to ρ_k is now affected by the adjacent elements.
Compared to the other constraining approaches, the application of a filter does not
require any additional constraint to be added to the problem.
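Equations (3.51)-(3.52) translate directly into code. The sketch below is an
illustrative implementation only; the function name, the element centres and the
example values are assumptions made for the demonstration.

```python
import numpy as np

def density_filter(rho, centers, r_min):
    """Linear density filter of eqs. (3.51)-(3.52): weights decay
    linearly with distance and vanish outside the radius r_min."""
    filtered = np.empty(len(rho))
    for k in range(len(rho)):
        d = np.linalg.norm(centers - centers[k], axis=1)
        w = np.maximum(r_min - d, 0.0)      # A_j^k = r_min - dist(k, j)
        filtered[k] = (w @ rho) / w.sum()   # normalized weights H_j^k
    return filtered

# five unit elements in a row, a single solid element in the middle
centers = np.array([[float(i), 0.0] for i in range(5)])
rho = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
print(density_filter(rho, centers, r_min=1.5))
```

The isolated solid element is smeared over its neighbours, which is exactly the
smoothing of the density described above.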
Another alternative to the direct filtering of the densities consists in filtering
the sensitivity information of the optimization problem; experience has shown that
this is the best way to ensure mesh-independency. The method works by modifying the
element sensitivities as follows:

∂û_out/∂ρ_k = (1 / ρ_k) Σ_{j=1}^{N} Ĥ_j^k ρ_j (∂u_out/∂ρ_j)   (3.53)

The standard expression of the element sensitivities was:

∂u_out/∂ρ_k = −p ρ_k^{p−1} λ_k^T K_k u_k   (3.54)
Filtering the sensitivities is not the same as applying the filter H to the
sensitivities, as the densities influence the result in this case; however, at a
little extra CPU-time, this procedure offers ease of implementation and results
very similar to those from the local gradient constraint. The filtered sensitivity
converges to the original sensitivity when the filter radius r_min approaches zero,
and all sensitivities become equal when r_min → ∞. An interesting side effect of
this technique is that it improves the computational behaviour of the topology
design procedure and allows for greater design variation before settling on an
optimal solution; this is obtained thanks to the inclusion of ρ_k in the filtering
expression.
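A minimal sketch of the sensitivity filter of eq. (3.53), assuming the same linear
weights as the density filter; the helper name `filter_sensitivities` and the
example data are illustrative assumptions.

```python
import numpy as np

def filter_sensitivities(dC, rho, centers, r_min):
    """Sensitivity filter of eq. (3.53): each element's sensitivity is
    replaced by a density-weighted average over its neighbourhood."""
    out = np.empty(len(rho))
    for k in range(len(rho)):
        d = np.linalg.norm(centers - centers[k], axis=1)
        w = np.maximum(r_min - d, 0.0)      # zero outside the radius
        out[k] = (w * rho * dC).sum() / (rho[k] * w.sum())
    return out

# three elements in a row, uniform density, mock sensitivities
centers = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
rho = np.ones(3)
dC = np.array([-1.0, -2.0, -3.0])
print(filter_sensitivities(dC, rho, centers, r_min=1.5))
```

With a radius small enough that only the element itself falls inside it, the
filtered values coincide with the originals, matching the limit behaviour for
r_min → 0 described above.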
The filtering techniques described above can also be used to control checkerboard
formation without imposing a mesh-independent length scale; in this case it is
necessary that the filtering be adjusted to be mesh-dependent (r_min has to vary
with the mesh size).
3.5. Bi-Directional Evolutionary Structural Optimization Method (BESO)
The evolutionary structural optimization (ESO) method is based on the simple
concept of gradually removing inefficient material from a structure; in this way
the structure evolves towards its optimal shape and topology. Even though it is not
possible to guarantee that the method will produce the best among all possible
solutions, it is a useful tool for engineers to explore structurally efficient
forms and shapes during the conceptual design stage of a project.
Through a finite element analysis of a component, it is possible to determine the
stress level in any part of its structure: a reliable indicator of inefficient use
of material is a low value of stress or strain in some part of the structure.
Ideally, the stress in every part of the structure should be close to the same safe
level. This concept leads to a rejection criterion based on the local stress level,
where low-stressed areas of material are assumed to be under-utilized and can
subsequently be removed; the removal process can be carried out by deleting
elements from the finite element model.
A simple way to define the stress level consists in comparing the effective von
Mises stress of the element σ_e^{vM} with the maximum von Mises stress of the whole
structure σ_max^{vM}. After the finite element analysis, all the elements that
satisfy the following rule can be removed from the model:

σ_e^{vM} / σ_max^{vM} < RR_i   (3.55)

where RR_i is the current rejection ratio.
The von Mises theory proposes that the total strain energy can be separated into
two components: the volumetric (hydrostatic) strain energy and the shape
(distortion or shear) strain energy. Yield is assumed to occur when the distortion
component exceeds that at the yield point of a simple tensile test. This theory is
approximately acceptable for ductile materials but not for brittle ones; for
brittle materials the maximum principal stress theory is considered more correct:
according to this theory, failure occurs when the maximum principal stress in a
system reaches the value of the maximum stress at the elastic limit in simple
tension.
Another consideration on the von Mises criterion regards the way it treats the
stress layout: it does not correctly handle a strongly anisotropic load layout that
produces a non-homogeneous stress state with a prevalent stress in one direction
and, moreover, it does not allow analysing each stress component separately.
It is important to underline here that an optimization algorithm is generally not
developed to work with any condition or any material, but is often tuned to work
only with certain specific cases; for all other cases a different algorithm, using
different stress analysis methods, may be necessary to produce trustworthy results.
Such a cycle of FEA and element removal is repeated using the same value of RR_i
until a steady state is reached, i.e. until no more elements are deleted at the
current rejection ratio.
At this stage an evolutionary rate ER is added to the rejection ratio, which
becomes:

RR_{i+1} = RR_i + ER   (3.56)
With the increased ratio the iteration process takes place again until a new steady
state is reached.
The process continues until a desired optimum is obtained, for example when there
is no material in the final structure that has a stress level lower than 25% of the
maximum allowable.
The optimization procedure is summarized in the five steps presented below:
• Step 1: discretize the structure using a fine mesh of finite elements;
• Step 2: carry out a finite element analysis of the structure;
• Step 3: remove all the elements that satisfy the defined rule for the stress
ratio;
• Step 4: increase the rejection ratio once a steady state is reached;
• Step 5: repeat steps 3 and 4 until a satisfactory optimum solution is reached.
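The removal rule and the steady-state check can be condensed into a small loop. The
sketch below is illustrative only: the stresses are fixed mock values standing in
for a finite element analysis (in the real method they would be recomputed after
each removal), and the function name and rejection ratio are assumptions.

```python
import numpy as np

def eso_stress_step(von_mises, active, rr):
    """One removal step: deactivate active elements whose von Mises
    stress is below rr times the current maximum (eq. 3.55)."""
    s_max = von_mises[active].max()
    return active & (von_mises >= rr * s_max)

# mock element stresses standing in for a finite element analysis
sigma = np.array([5.0, 40.0, 90.0, 100.0, 2.0, 55.0])
active = np.ones_like(sigma, dtype=bool)
while True:                       # iterate at a fixed rejection ratio
    new = eso_stress_step(sigma, active, rr=0.05)
    if new.sum() == active.sum():
        break                     # steady state: RR would now be increased
    active = new
print(active)
```

Only the element stressed below 5% of the maximum is removed; the loop then reaches
the steady state at which eq. (3.56) would raise the rejection ratio.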
Stiffness is one of the key factors that must be taken into account when designing
a mechanical structure; commonly, the mean compliance C (an inverse measure of the
overall stiffness) of the structure is considered. The mean compliance is defined
as the total strain energy of the structure (the external work done by the applied
loads):

C = (1/2) f^T u   (3.57)

where f and u are the external force and displacement vectors respectively.
Even if this equation has been presented before, it is worth recalling that in FEA
the static equilibrium of a structure is expressed as:

K u = f   (3.58)

where K is the global stiffness matrix.
When the i-th element is removed from the structure, the stiffness matrix changes
by:

ΔK = K* − K = −K_i   (3.59)

In the previous equation, K* is the stiffness matrix of the resulting structure and
K_i is the stiffness matrix of the i-th element.
The most general assumption is that the vector f of the applied external loads is
not modified by the element removal process.
As a consequence of the element removal, the displacement vector experiences a
variation too, expressed as:

Δu = −K^{−1} ΔK u   (3.60)
In conclusion, the change in the mean compliance is expressed as:

ΔC = (1/2) f^T Δu = −(1/2) f^T K^{−1} ΔK u = (1/2) u_i^T K_i u_i   (3.61)

The sensitivity number is:

α_i = (1/2) u_i^T K_i u_i   (3.62)
The previous equation indicates that the increase in the mean compliance caused by
the removal of the i-th element is equal to its elemental strain energy.
Minimizing the mean compliance (which is equivalent to maximizing the stiffness)
through the element removal process is often the main objective of the optimization
process; it can be achieved in an effective way by removing the elements with the
lowest values of α_i, so that the increase in C is minimal.
The number of elements to be removed is determined by the element removal ratio,
which is defined as the ratio of the number of elements removed at each iteration to
the total number of elements of the initial FEA model.
The stiffness optimization procedure can be summarized in the following steps:
• Step 1: discretize the structure using a fine mesh of finite elements;
• Step 2: carry out a finite element analysis of the structure;
• Step 3: calculate the sensitivity number of each element;
• Step 4: remove the elements with the lowest sensitivity values, according to the
predefined element removal ratio ERR;
• Step 5: repeat steps 2 to 4 until the mean compliance (or maximum displacement)
of the analysed structure reaches the defined limit.
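Step 3, the sensitivity number of eq. (3.62), is just a per-element strain energy.
A minimal sketch with hand-made element matrices follows; the two-node spring-like
elements and their displacements are illustrative assumptions, not values from a
real FE model.

```python
import numpy as np

def element_sensitivities(K_elems, u_elems):
    """Sensitivity number of eq. (3.62): alpha_i = 1/2 u_i^T K_i u_i,
    i.e. the strain energy stored in each element."""
    return np.array([0.5 * u @ K @ u for K, u in zip(K_elems, u_elems)])

# two illustrative spring-like elements with known stiffnesses and
# nodal displacements (hand-made values)
K_e = [k * np.array([[1.0, -1.0], [-1.0, 1.0]]) for k in (2.0, 4.0)]
u_e = [np.array([0.0, 0.1]), np.array([0.1, 0.15])]
alpha = element_sensitivities(K_e, u_e)
print(alpha)          # the lowest-alpha element is removed first
```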
Contrary to the optimization procedure based on the stress level, the optimization
procedure for stiffness does not require a specified steady state: in this case it
is possible to improve the computational efficiency with far fewer iterations, but
it may sometimes result in numerical problems such as the production of unstable
structures. A sensitivity number can also be derived for a displacement constraint,
where the maximum displacement of the structure, or the displacement at a specific
location, has to stay within a predefined limit.
The ESO method starts from the full structural design and removes inefficient
material from the structure according to the stress and strain energy levels of the
elements: it is a simple concept, does not require sophisticated mathematical
programming techniques, and can be implemented in available FEA software packages.
One of the main advantages of the method is that it does not require regenerating
new finite element meshes even when the final structure has departed substantially
from the initial design; element removal is done by simply setting the material
property number of the rejected elements to zero and then ignoring those elements
when the global stiffness matrix is assembled in the subsequent finite element
analysis. As more and more elements are removed, the number of equations to be
solved diminishes, yielding a substantial reduction of computation time, even for
large three-dimensional structures.
To minimize the material usage under a given performance constraint, the ESO method
acts on the structure by reducing its weight (or volume), gradually removing
material until the constraint can no longer be satisfied. However, there is a limit
to this procedure: material removed in an early iteration might be required later
as part of the optimal design, and the ESO method is not able to recover that
material once it has been deleted from the design. The ESO method may be able to
provide an improved solution over an initial design, but it often cannot be
considered an absolute optimum result.
The bi-directional evolutionary structural optimization (BESO) method goes past
the limits of the ESO method and allows material to be removed and added
simultaneously. In the BESO method applied to stiffness optimization, the sensitivity
numbers of void elements are estimated through a linear extrapolation of the
displacement field after the finite element analysis. At this point, the solid elements
with the lowest sensitivity number are removed from the structure and the void
elements with the highest sensitivity are changed back into solid elements. The
numbers of removed and added elements in each iteration are determined by two
independent parameters, RR and IR, the rejection ratio and the inclusion ratio
respectively.
The BESO concept has also been applied to "full stress design" using the von Mises
criterion, where the elements with the lowest stresses are removed and the void
elements near the regions with the highest stresses are switched back to solid. In
a comparable way to the stiffness optimization problem, the numbers of elements to
be rejected and added are defined by a rejection ratio and an inclusion ratio.
The main aim of topology optimization is to search for the stiffest structure with
a given volume of material; in the BESO method a structure is optimized by removing
and adding elements, so the element itself is treated as the design variable.
The optimization problem with the volume constraint is defined as:

minimize  C = (1/2) f^T u   (3.63)

subject to:

V* − Σ_{i=1}^{N} V_i x_i = 0   (3.64)

This problem formulation is currently the most used for topology optimization
problems. x_i is a binary design variable that expresses the absence (0) or
presence (1) of an element; f and u are the external force and displacement vectors
respectively; C is the mean compliance; V* and V_i are the total volume of the
structure and the volume of an individual element respectively.
When a solid element is removed from the structure, the change of the mean
compliance (or total strain energy) is equal to the element strain energy, and it
is defined as the elemental sensitivity number:

α_i = (1/2) u_i^T K_i u_i = ΔC_i   (3.65)

where K_i is the elemental stiffness matrix and u_i is the nodal displacement
vector of the element.
When a non-uniform mesh is used, the sensitivity number has to take into account
the effect of the element volume; in this case the sensitivity number is replaced
by the elemental strain energy density:

α_i = e_i = (1/2) u_i^T K_i u_i / V_i   (3.66)
An efficient solution to this problem consists in averaging the sensitivity number
with its historical information; the simple averaging scheme is:

α_i = (α_i^k + α_i^{k−1}) / 2   (3.70)

where k is the current iteration number. After each iteration is solved, let
α_i^k = α_i, which will then be used in the next iteration: in this way the updated
sensitivity number includes the whole history of the sensitivity information from
the previous iterations.
Whilst the averaging scheme affects the search path of the BESO algorithm, its
effect on the final solution is very small once the algorithm becomes convergent
and, thanks to the evolution history, the final result is highly stable in both the
topology and the objective function.
Before adding or removing elements from the current design, the target volume for
the next iteration V_{k+1} needs to be given first: since the volume constraint V*
can be greater or smaller than the volume of the initial guess design, the target
volume in each iteration may decrease or increase step by step until the target
value is achieved. The volume evolution is expressed as:

V_{k+1} = V_k (1 ± ER),  k = 1, 2, …   (3.71)

where ER is the evolutionary volume ratio. Once the volume constraint is satisfied,
the volume of the structure is kept constant for all the following iterations, as
V_{k+1} = V*.
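Eq. (3.71) amounts to a simple volume schedule. The sketch below is illustrative:
the function name, the 10% evolutionary ratio and the 50% target are assumptions.

```python
def target_volume(v_k, v_star, er):
    """Eq. (3.71): move the target volume one step toward V* by the
    evolutionary volume ratio ER, then hold it constant at V*."""
    if v_k > v_star:
        return max(v_k * (1.0 - er), v_star)
    return min(v_k * (1.0 + er), v_star)

v, v_star = 1.0, 0.5          # start from the full design, target 50%
history = []
while v != v_star:            # exact comparison is safe: we clamp to v_star
    v = target_volume(v, v_star, er=0.10)
    history.append(v)
print(len(history), history[-1])
```

The volume decreases monotonically by 10% per iteration and is then clamped at V*,
after which it stays constant, as described above.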
At this point the sensitivity numbers of all elements (both solid and void) are
calculated as described before; the elements are then sorted according to the
values of their sensitivity numbers. A solid (1) element will be removed (switched
to 0) if α_i ≤ α_del^th, whereas a void (0) element will be added (switched to 1)
if α_i > α_add^th; α_del^th and α_add^th are the threshold sensitivity numbers for
removing and adding elements respectively. There are two different procedures to
define the threshold sensitivity numbers, which are explained below.
The threshold sensitivity numbers are defined according to the following procedure:
1. Let α_add^th = α_del^th = α_th, where α_th can be determined from V_{k+1}. An
example will explain this concept easily: if there are 1000 elements in the
design domain with sensitivity numbers listed as α_1 > α_2 > … > α_1000, and if
V_{k+1} corresponds to a design with 725 elements, then α_th = α_725.
2. Calculate the volume addition ratio AR, defined as the number of added elements
divided by the total number of elements in the design domain; if AR < AR_max
(AR_max is the prescribed maximum volume addition ratio) then step 3 can be
skipped, otherwise recalculate α_add^th and α_del^th as prescribed in the third
step.
3. Calculate α_add^th by first sorting the sensitivity numbers of the void (0)
elements; the number of elements to be switched from 0 to 1 is equal to AR_max
multiplied by the total number of elements. α_add^th is the sensitivity number
of the element ranked just below the last added element. α_del^th is then
determined so that α_del^th ≤ α_add^th and the removed volume equals
(V_k − V_{k+1} + the volume of the added elements).
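Step 1 of the procedure reduces to a sorting operation. The sketch below reproduces
the 1000-element example from the text; the random sensitivity numbers and the
helper name are illustrative assumptions.

```python
import numpy as np

def threshold_from_target(alpha, n_target):
    """Step 1: pick the single threshold alpha_th = alpha_{n_target}
    when sensitivities are ranked alpha_1 > alpha_2 > ..."""
    order = np.argsort(alpha)[::-1]       # indices, highest alpha first
    return alpha[order[n_target - 1]]

rng = np.random.default_rng(0)
alpha = rng.random(1000)                  # mock sensitivity numbers
a_th = threshold_from_target(alpha, 725)
print((alpha >= a_th).sum())              # 725 elements stay solid
```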
AR_max is introduced to ensure that not too many elements are added in a single
iteration, otherwise the structure may lose its integrity when the BESO method
starts from an initial design guess; generally, AR_max is greater than 1% so that
it does not suppress the capability of adding elements.
The cycle of finite element analysis and element removal/addition continues until
the objective volume V* is reached and the following convergence criterion, defined
in terms of the variation of the objective function, is satisfied:

error = | Σ_{i=1}^{N} C_{k−i+1} − Σ_{i=1}^{N} C_{k−N−i+1} | / Σ_{i=1}^{N} C_{k−i+1} ≤ τ   (3.72)

where τ is the allowable convergence tolerance, k is the current iteration number
and N is an integer, normally selected to be 5; this implies that the change in the
mean compliance over the last 10 iterations is relatively small.
The BESO method presented in this chapter is considered a "hard-kill" method, due
to the complete removal of an element instead of changing it into a very soft
material; in this way the computational time is significantly reduced, especially
for large 3D structures, since the removed elements are not involved in the finite
element analysis.
The evolutionary iteration procedure of the presented BESO method is given as
follows and is represented in Figure 12:
1. Discretize the design domain using a finite element mesh and assign initial
property values (0 or 1) to the elements to construct an initial design;
2. Perform the finite element analysis and then calculate the elemental
sensitivity numbers;
3. Average the sensitivity number with its history information and save the
resulting sensitivity number for the next iteration;
4. Determine the target volume for the next iteration;
5. Add and delete elements;
6. Repeat steps 2-5 until the constraint volume (V*) is achieved and the
convergence criterion is satisfied.
Figure 12: Flowchart of the BESO method [3].
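The convergence test of eq. (3.72) used in step 6 can be sketched as follows; the
compliance history values are made up for the demonstration.

```python
def beso_converged(C_hist, tau=1e-3, N=5):
    """Convergence criterion of eq. (3.72): the relative change in the
    mean compliance between the last N iterations and the N before
    them must fall below the tolerance tau."""
    if len(C_hist) < 2 * N:
        return False                      # not enough history yet
    recent = sum(C_hist[-N:])             # C over iterations k-N+1 .. k
    previous = sum(C_hist[-2 * N:-N])     # the N iterations before those
    return abs(recent - previous) / recent <= tau

history = [100.0, 90.0, 84.0, 80.0] + [79.0] * 10
print(beso_converged(history))
```

Because the criterion compares two windows of N iterations, it effectively checks
that the mean compliance has been stable over the last 2N (here 10) iterations.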