1. The document describes a genetic algorithm to optimize the seismic retrofitting of existing reinforced concrete frames.
2. A sample application of retrofitting a 3D three-storey RC frame is presented to demonstrate the algorithm.
3. The optimal retrofitting solution for the sample frame consisted of concentric steel bracings in the two plane frames along the x- and y-directions, with no need for local FRP interventions on columns.
A genetic algorithm aimed at optimising seismic retrofitting of existing RC frames

1. A Genetic Algorithm aimed at optimizing seismic retrofitting of existing RC frames
Roberto FALCONE, Ciro FAELLA, Carmine LIMA, Enzo MARTINELLI
DICiv – Department of Civil Engineering, University of Salerno, IT
2. Summary
OpenSees Days Europe 2017 – Carmine LIMA (clima@unisa.it)
1. Introduction
2. Problem statement and formulation
• Representation of individuals
• Seismic Analysis
• Evolution criteria
3. Sample application and computational efficiency
4. Conclusions
3. Introduction
The "as-built" structure can be retrofitted through "member-level" techniques or "structure-level" techniques. For the as-built structure, the Limit State function is not satisfied:

g_LS,i = C_LS,i − D_LS,i < 0

[Figure: base shear V vs. displacement Δ curves. Member-level techniques: demand D_LS,i unaltered, capacity C_LS,i increased. Structure-level techniques: demand D_LS,i reduced, capacity C_LS,i unaltered.]
4. Introduction (continued)
A generic intervention can be conceived as a combination of the two "extreme" solutions ("member-level" + "structure-level"), with the aim of obtaining a synergistic action: increasing the seismic capacity of under-designed members (C_LS,i increased) while reducing the demand on the whole structure (D_LS,i reduced), so that

g_LS,i = C_LS,i − D_LS,i ≥ 0

Each one of the potentially infinite combinations of member- and structure-level interventions leads to different:
• direct costs;
• life-cycle costs;
• reliability levels;
• other quantitative/qualitative parameters.

Choosing the "fittest" possible combination of these two techniques is clearly a problem of structural optimisation.
5. Problem statement and formulation
Within the present work, the objective function is chosen to be proportional to the total direct cost:

f(x) = [C_loc(x) + C_glob(x)] · Φ(max_i g_LS,i(x))

Φ is a penalty function intended to modify the nominal cost of intervention for those individuals that fail to comply with the retrofitting objectives.

If x is the vector of design variables defining the generic intervention, the optimal retrofitting solution may be determined by solving the following constrained optimisation problem:

x* = arg min_x f(x)
subject to g_LS,i(x) ≥ 0, ∀ i = 1 ... n_LS
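As an illustration only, the penalised objective can be sketched in a few lines of Python. The function names (`c_loc`, `c_glob`, `g_ls`), the step-shaped penalty and its magnitude are assumptions of this sketch, not the authors' implementation; the sketch also penalises an individual whenever its worst margin min_i g_LS,i is negative.

```python
def phi(worst_margin, penalty=10.0):
    """Step penalty function: 1 for feasible individuals, `penalty` otherwise."""
    return 1.0 if worst_margin >= 0.0 else penalty

def objective(x, c_loc, c_glob, g_ls, penalty=10.0):
    """Penalised total direct cost f(x) = [C_loc(x) + C_glob(x)] * Phi(...).

    c_loc, c_glob -- callables giving the member- and structure-level costs;
    g_ls          -- callable giving the list of limit-state margins C - D.
    """
    margins = g_ls(x)
    return (c_loc(x) + c_glob(x)) * phi(min(margins), penalty)

# A feasible individual keeps its nominal cost; an unfeasible one is inflated.
feasible = objective(None, lambda x: 2.0, lambda x: 3.0, lambda x: [0.1, 0.4])    # 5.0
unfeasible = objective(None, lambda x: 2.0, lambda x: 3.0, lambda x: [-0.1, 0.4]) # 50.0
```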
6. Problem statement and formulation: flow chart of the Genetic Algorithm
IN → generate initial population → evaluate fitness → is the optimization criterion met?
• NO → generate a new population (SELECTION → CROSSOVER → MUTATION) and evaluate its fitness again;
• YES → translate the best individual → OUT.

The first step of the procedure is the random generation of an initial population of N_ind individuals. Each individual x of such a population is represented through a simple chromosome-like array of bits:

"MEMBER-LEVEL" (N_col × 2 bits): 0 0 1 0 1 1 0 1 1 0 1 1 0 1 1 1 0 0 1 0 0 1 0 0
"STRUCTURE-LEVEL" (N_beam × 3 bits): 1 1 0 1 0 1 1 0 1 1 0 0
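The flow chart can be condensed into a short driver loop. The Python sketch below is generic: the population size, generation count and operator interfaces are placeholders standing in for the procedure's actual components, not the authors' code.

```python
import random

N_BITS = 36   # e.g. N_col x 2 bits + N_beam x 3 bits for the sample frame
N_IND = 50    # individuals per population
N_GEN = 150   # fixed maximum number of generations (stopping criterion)

def random_individual(n_bits=N_BITS):
    """Random chromosome-like array of bits."""
    return [random.randint(0, 1) for _ in range(n_bits)]

def evolve(cost, select, crossover, mutate):
    """Generic GA driver returning the best (lowest-cost) individual found.

    cost      -- maps a bit list to the penalised cost to minimise;
    select    -- picks the mating pool from the current population;
    crossover -- turns the mating pool into offspring;
    mutate    -- randomly perturbs a single offspring.
    """
    population = [random_individual() for _ in range(N_IND)]
    best = min(population, key=cost)
    for _ in range(N_GEN):
        pool = select(population, cost)
        offspring = crossover(pool)
        population = [mutate(child) for child in offspring]
        best = min(population + [best], key=cost)   # keep the best-so-far
    return best
```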
7. Representation of individuals
In the first part, each couple of bits contains the number of FRP layers employed for confining the corresponding column (ranging between zero, the as-built configuration, and 3 layers of FRP); hence, a total of 2 × N_col bits are allocated in the first part. The confined concrete behaviour follows the model by Kent and Park.

[Figure: Kent and Park stress–strain model (fc vs. εc) for confined and unconfined concrete, with characteristic stresses f'c, 0.5 f'c, 0.2 f'c and strains ε0 = 0.002, ε20c, ε50c, ε50h, ε50u.]

In the second part, the procedure employs three bits for each bracing; hence, bracings are codified by only 2³ = 8 possible phenotype solutions (from 0 = absence to 7 = stiffest section), which identify the section of the steel bracings at the first level.

DECODING:
"MEMBER-LEVEL" bits 0 0 | 1 0 | 1 1 | 0 1 | 1 0 | 1 1 | 0 1 | 1 1 | 0 0 | 1 0 | 0 1 | 0 0 → n° of FRP layers: 0 2 3 1 2 3 1 3 0 2 1 0
"STRUCTURE-LEVEL" bits 1 1 0 | 1 0 1 | 1 0 1 | 1 0 0 → ID section: 6 5 5 4
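The decoding of a chromosome into phenotype values is straightforward; the following Python function (the names are mine, not from the original code) reproduces the example above.

```python
def decode(chromosome, n_col=12, bits_per_bracing=3):
    """Decode a bit string: 2 bits per column -> number of FRP layers,
    3 bits per bracing -> ID of the steel bracing section."""
    member, structure = chromosome[:2 * n_col], chromosome[2 * n_col:]
    frp_layers = [int(member[i:i + 2], 2) for i in range(0, len(member), 2)]
    sections = [int(structure[i:i + bits_per_bracing], 2)
                for i in range(0, len(structure), bits_per_bracing)]
    return frp_layers, sections

# The chromosome shown on the slide:
frp, sec = decode("001011011011011100100100" + "110101101100")
# frp -> [0, 2, 3, 1, 2, 3, 1, 3, 0, 2, 1, 0], sec -> [6, 5, 5, 4]
```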
The relationship between the section of the steel members at the upper floors and the section of the steel bracings at the first level stems from a consistent design criterion:

A_k,des = A_1 · (Σ_{j=k..n} h_j W_j) / (Σ_{i=1..n} h_i W_i)

[Figure: frame elevation with bracing areas A1, A2, A3 and storey weights W1, W2, W3.]
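One consistent reading of this criterion (the bracing area at storey k scales with the fraction of Σ h·W carried at and above that storey) can be sketched as follows; the slide's formula did not survive extraction cleanly, so treat the exact form of the ratio as an assumption of this sketch.

```python
def bracing_area(k, a1, heights, weights):
    """Design area of the bracing at storey k (1-based), scaled from the
    first-storey area a1 by (sum_{j>=k} h_j W_j) / (sum_i h_i W_i)."""
    total = sum(h * w for h, w in zip(heights, weights))
    above = sum(h * w for h, w in zip(heights[k - 1:], weights[k - 1:]))
    return a1 * above / total

# Three-storey frame, equal storey weights: areas decrease with height.
areas = [bracing_area(k, 10.0, [3.0, 6.0, 9.0], [1.0, 1.0, 1.0]) for k in (1, 2, 3)]
```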
8. Seismic Analysis of individuals
For a given "individual" x of the population, the value of the function g_LS,i is simply evaluated through a Static Non-Linear (pushover) Analysis: the capacity curve (base shear V vs. top displacement Δ) is converted into the equivalent SDOF format (V*/m* vs. Δ*) and compared with the elastic and inelastic ADRS demand spectra. The displacement demand D_SLV = Δ* · Γ* is then compared with the capacity C_SLV:

g_LS,i = C_LS,i − D_LS,i ≥ 0 ?
9. Evolution criteria
Once the total cost of all N_ind individuals and their objective functions g_LS,i are evaluated, the genetic algorithm evolves through three operators (SELECTION → CROSSOVER → MUTATION, generating a new population at each step) until the counter of populations reaches a maximum fixed number (counter = 150).

The selection operator is used to select "parents" among a mating pool of solutions according to their fitness. According to the so-called "roulette-wheel" rule, a string characterised by a higher fitness value (and, hence, a wider range in the cumulative probability values) has a higher probability of being selected:

Individual | Chromosome | Probability | Cumulative P.
1 | [010101010101100011001110] | 0.24 | 0.24
2 | [110100011101101001011000] | 0.08 | 0.32
3 | [010001110101101011000110] | 0.39 | 0.71
4 | [000101010101100011001010] | 0.10 | 0.81
5 | [010111010000101010001111] | 0.19 | 1.00

The second operator, crossover, combines segments of selected strings into new "offspring" solutions by exchanging their genetic information between successive crossover points ("multi-point" crossover).

The third operator, mutation, allows for the possibility that features not existing in either parent string may be created and passed to the children. Mutation sweeps down the string of bits and changes a bit from 0 to 1 (or vice versa) if a fixed probability test is passed; it helps to avoid getting trapped at local optima:

Old chromosome: 1 0 1 0 1 1 0 0
Random numbers: 0.001 0.073 0.325 0.024 0.802 0.001 0.023 0.004
New chromosome: 0 0 1 0 1 0 0 0

(Recall: f(x) = [C_loc(x) + C_glob(x)] · Φ(max_i g_LS,i(x)), subject to g_LS,i(x) ≥ 0 ∀ i = 1 ... n_LS.)
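The three operators can be sketched generically in Python (roulette-wheel selection, multi-point crossover, bit-flip mutation). The mutation probability and the operator signatures are assumptions of this sketch, not the authors' code.

```python
import random

def roulette_select(population, fitnesses):
    """Roulette wheel: selection probability proportional to (positive) fitness."""
    r, cum = random.uniform(0.0, sum(fitnesses)), 0.0
    for individual, fit in zip(population, fitnesses):
        cum += fit
        if r <= cum:
            return individual
    return population[-1]

def multipoint_crossover(a, b, points):
    """Exchange the parents' bits between successive crossover points."""
    child1, child2, swap = list(a), list(b), False
    for i in range(len(a)):
        if i in points:
            swap = not swap          # entering/leaving a swapped segment
        if swap:
            child1[i], child2[i] = child2[i], child1[i]
    return child1, child2

def mutate(bits, p=0.05):
    """Flip each bit independently if a probability test is passed."""
    return [1 - b if random.random() < p else b for b in bits]
```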
10. Sample application and computational efficiency
A simple 3D three-storey RC frame is taken as a preliminary example in order to show the potential and the working details of the presented optimization algorithm. Both the Limit State of Life Safety (SLV) and that of Damage Limitation (SLD) are considered.

[Figure: plan (X and Y directions) and 3D view of the sample frame.]
11. Computational efficiency
Current research is devoted to enhancing the computational efficiency of the programming code. The non-linear static analysis of 50 individuals × 150 generations × 8 pushover analyses per individual (according to EC8) amounts to 60,000 analyses. Almost 94% of the total computational time of the procedure is taken by the seismic analyses in OpenSees, whereas only 6% refers to the pre- and post-processing operations handled by the genetic algorithm.

Modelling choices under investigation:
• nonlinearity of RC members: distributed plasticity (nonlinearBeamColumn) vs. concentrated plasticity (beamWithHinges);
• steel bracings: with or without the buckling effect;
• accidental eccentricity: central node connection with nonlinearBeamColumn elements;
• floor diaphragms: without diaphragms, equivalent trusses, or shell elements.
12. Computational efficiency: nonlinearity of RC members
Comparing distributed plasticity (nonlinearBeamColumn) with concentrated plasticity (beamWithHinges), the beamWithHinges formulation leads to a time for performing the analyses about 40% lower.

[Figure: pushover curves, base shear Vb [N] (0 to 180,000) vs. top displacement Dtop [mm] (0 to 350), for the two plasticity formulations.]
13. Computational efficiency: steel bracings and accidental eccentricity
Accidental eccentricity is modelled through a central node connection with nonlinearBeamColumn elements; the buckling effect of the steel bracings is modelled by introducing an initial eccentricity.

[Figure: 3D and plan views of the braced frame model, with the initial eccentricity of the bracings.]
14. Computational efficiency: buckling effect of steel bracings
[Figure: force–displacement response of the steel bracings in tension and compression, axial force N (kN) from 0 to 900 vs. displacement from 0 to 200.]
15. Computational efficiency: floor diaphragms
Three modelling options are considered for the floor diaphragms: no diaphragm, equivalent trusses, and shell elements.

[Figure: pushover curves in the X and Y directions, base shear Vb [N] (0 to 180,000) vs. top displacement Dtop [mm] (0 to 350).]
16. Results
The optimal solution turned out to consist of concentric steel bracings (realised in the two plane frames along the x- and y-directions), while no local FRP interventions are actually required. The best individual returned by the algorithm is translated into the most appropriate retrofitting solution.
17. Conclusions
• The proposed procedure has the potential to support engineering judgement in determining the "fittest" seismic retrofitting solution for RC frames.
• Future developments are intended to include the aspects not yet taken into account (i.e., analysis of real structures, further Limit States, indirect costs, and a multi-criteria objective function, among others).
• Future developments should also aim at enhancing the computational efficiency of the computer procedure, whose computational cost is one of the main critical issues to be duly addressed for the proposed method to be actually feasible in real applications.