This three-sentence summary provides the high-level information from the document:
The document discusses integration schemes for fast hybrid testing used at the University of Colorado's NEES facility. It describes how an unconditionally stable implicit integration method, specifically the α method, is used to achieve real-time or close to real-time earthquake simulations through a constrained implementation. Key aspects of the integration scheme include maintaining displacement continuity and force equilibrium between the numerical and experimental components throughout the numerical integration process.
Robust and efficient nonlinear structural analysis using the central differen... (openseesdays)
This document compares the central difference time integration scheme to the traditional Newmark average acceleration scheme for nonlinear structural analysis. The central difference scheme is explicit, does not require iteration, and is well-suited for parallelization. Incremental dynamic analyses using the central difference scheme converged for more ground motions and estimated a 10% higher median collapse capacity compared to the Newmark scheme. Analyses using the central difference scheme were also up to 5 times faster when run in parallel on high-performance computers. Parallel algorithms were developed to efficiently conduct multiple stripe analyses and incremental dynamic analyses using computational resources.
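To make the contrast concrete, the explicit update can be sketched for a single degree of freedom (a generic illustration of the central difference scheme, not the code behind the study; function and parameter names are my own):

```python
import math

def central_difference(m, c, k, u0, v0, p, dt, nsteps):
    """Explicit central-difference integration of m*u'' + c*u' + k*u = p(t).

    Conditionally stable: dt must stay below T_min/pi for the shortest
    period in the model.  Returns the displacement history."""
    a0 = (p(0.0) - c * v0 - k * u0) / m          # initial acceleration
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0     # fictitious u at t = -dt
    u_curr = u0
    history = [u0]
    m_eff = m / dt**2 + c / (2 * dt)             # effective "mass"
    for i in range(nsteps):
        rhs = (p(i * dt)
               - (k - 2 * m / dt**2) * u_curr
               - (m / dt**2 - c / (2 * dt)) * u_prev)
        u_next = rhs / m_eff                     # no equilibrium iteration
        u_prev, u_curr = u_curr, u_next
        history.append(u_curr)
    return history
```

For an undamped oscillator with natural period 1 s and dt = 0.001 s, the computed free vibration tracks u(t) = cos(2πt) closely; because the scheme is explicit, each step is a single function evaluation with no iteration, which is what makes it attractive for parallel nonlinear analysis.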
Effect of Residual Modes on Dynamically Condensed Spacecraft Structure (IRJET Journal)
This document discusses the effect of residual modes on the fundamental frequencies of a condensed spacecraft structure. It presents the modeling and dynamic analysis of a spacecraft bus structure using finite element analysis. The structure is condensed using the Craig-Bampton method to reduce the degrees of freedom. Residual modes are then computed and included to recover data lost during condensation. The results show that including residual modes provides frequencies for the condensed structure that closely match those of the original full structure model, demonstrating the effectiveness of using residual modes for data recovery after structural condensation.
2015 New trans-stilbene derivatives with large TPA values (varun Kundi)
This document discusses a theoretical study of the linear and non-linear optical properties of 13 new trans-stilbene derivatives designed to have large two-photon absorption cross-sections. The study uses density functional theory and time-dependent density functional theory calculations with the CAM-B3LYP functional to evaluate properties like hyperpolarizability and one- and two-photon absorption. It finds that derivatives TSBD-10, TSBD-11, TSBD-12, and TSBD-13 have particularly large non-linear optical susceptibilities and two-photon absorption cross-sections, with the largest being 5560 GM for TSBD-13.
Intelligent back analysis using data from the instrument (poster) (Hamed Zarei)
This document presents a model using artificial neural networks for back analysis of tunnel monitoring data from the Chehel Chai water conveyance tunnel in Iran. Input data from 27 parameters across 3 categories were used to train a neural network model on results from 18 convergence stations. The trained model was then able to accurately estimate rock mass elasticity and in situ stress values based on new monitoring data, demonstrating its effectiveness for intelligent back analysis of future tunnel monitoring results.
This document describes a numerical simulation of the dynamics of a tethered buoy system. It proposes a novel mixed finite element formulation to model the elastic cable in a robust way, even when the Young's modulus is very large. It also uses quaternion variables to describe the floating body's dynamics, providing numerical stability during large rotations. The coupled nonlinear equations governing the cable and body are discretized in time using the implicit Backward Euler method and linearized with a damped Newton's method. Validation simulations are presented to demonstrate the accuracy and robustness of the overall numerical procedure.
Numerical Simulation of Gaseous Microflows by Lattice Boltzmann Method (IDES Editor)
This work is concerned with the application of the Lattice Boltzmann Method (LBM) to compute flows in microgeometries. LBM is a good choice for microflow simulation because it is based on the Boltzmann equation, which is valid over the whole range of the Knudsen number. In this work, LBM is applied to simulate pressure-driven microchannel flows and micro lid-driven cavity flows. First, the microchannel flow is studied in some detail, examining the effects of varying the Knudsen number, the pressure ratio, and the Tangential Momentum Accommodation Coefficient (TMAC). The pressure distribution and other parameters are compared with available experimental and analytical data, with good agreement. Having thus established the credibility of the code and the method, including the boundary conditions, LBM is then used to investigate the micro lid-driven cavity flow. The computations are carried out mainly for the slip regime and the threshold of the transition regime.
The document summarizes the response spectrum method of analysis for evaluating seismic design forces on structures. It discusses that the method converts a dynamic analysis into a partial dynamic and partial static analysis. Key steps include performing a modal analysis to obtain mode shapes and frequencies, using the acceleration response spectrum to derive equivalent static loads for each vibration mode, and combining modal responses using various rules to obtain the total maximum structural response. The method provides an approximate but effective technique for seismic analysis of structures.
Hyperspectral unmixing using novel conversion model.ppt (grssieee)
The document presents a novel hyperspectral unmixing approach called uccm-SVM that converts the abundance quantification problem into a classification problem using support vector machines. The approach is tested on both simulated and real hyperspectral images and is shown to outperform traditional mean-based techniques like FCLS in terms of accuracy while having lower computational costs for smaller training set sizes. Future work to improve the method includes enhancing performance while reducing computation for larger training sets.
Optimal and Power Aware BIST for Delay Testing of System-On-Chip (IDES Editor)
Test engineering for fault-tolerant VLSI systems is encumbered with optimization requisites for hardware overhead, test power, and test time. The high quality of these complex high-speed VLSI circuits can be assured only through delay testing, which involves checking for accurate temporal behavior. In the present paper, a data-path-based built-in test pattern generator (TPG) that generates iterative pseudo-exhaustive two-patterns (IPET) for parallel delay testing of modules with different input cone capacities is implemented. Further, the present study carries out a CMOS implementation of a low-power architecture (LPA) for scan-based built-in self-test (BIST) for delay testing and combinational testing, which reduces test power dissipation in the circuit under test (CUT). Experimental results and comparisons with pre-existing methods prove the reduction in hardware overhead and test time.
COMPLEMENTARY VISION BASED DATA FUSION FOR ROBUST POSITIONING AND DIRECTED FL... (ijaia)
The present paper describes an improved 4-DOF (x/y/z/yaw) vision-based positioning solution for fully 6-DOF autonomous UAVs, optimised in terms of computation and development costs as well as robustness and performance. The positioning system combines Fourier-transform-based image registration (Fourier Tracking) and differential optical flow computation to overcome the drawbacks of either single approach. The first method is capable of recognizing movement in four degrees of freedom under variable lighting conditions, but suffers from a low sample rate and high computational costs. Differential optical flow computation, on the other hand, enables a very high sample rate that improves control robustness. This method, however, is limited to translational movement only and performs poorly in bad lighting conditions. A reliable positioning system for autonomous flights with free heading is obtained by fusing both techniques. Although the vision system can measure the variable altitude during flight, infrared and ultrasonic sensors are used for robustness. This work is part of the AQopterI8 project, which aims to develop an autonomous flying quadrocopter for indoor applications and makes autonomous directed flight possible.
Impact of Partial Demand Increase on the Performance of IP Networks and Re-op... (EM Legacy)
This document discusses communication networks and the impact of partial demand increases on network performance. It begins by explaining intra-domain IP routing and shortest path routing approaches. It then motivates investigating how partial increases in demand affect network performance metrics and if re-optimization of routing is needed. The document outlines an approach for modeling this problem and evaluating re-optimization methods like partial least squares and simulated annealing to minimize maximum link utilization while limiting routing changes. The results suggest partial least squares performs better at successfully re-optimizing networks in response to demand increases with fewer routing changes.
Analysis of pavement management activities programming by particle swarm opti... (IDES Editor)
The document analyzes the use of particle swarm optimization (PSO) to program pavement maintenance activities at the network level for a pavement management system (PMS). PSO is shown to be effective at handling the highly constrained problem of programming pavement maintenance activities. The paper presents a novel model using PSO for PMS and determines the optimal rehabilitation activity and costs for a hypothetical road network consisting of 15 segments over 15 years. Numerical examples demonstrate the trade-offs between rehabilitation and maintenance costs under different cost functions. The results show that PSO is suitable for optimizing pavement management at the network level in PMS.
International Refereed Journal of Engineering and Science (IRJES) (irjes)
International Refereed Journal of Engineering and Science (IRJES) is a leading international journal for the publication of new ideas, state-of-the-art research results, and fundamental advances in all aspects of Engineering and Science. IRJES is an open-access, peer-reviewed international journal whose primary objective is to provide the academic community and industry with a venue for the submission of original research and applications.
Process optimization and process adjustment techniques for improving (IAEME Publication)
This document discusses techniques for optimizing manufacturing processes and adjusting processes to improve quality. It begins by describing traditional process optimization and adjustment methods using statistical process control (SPC) charts. However, SPC alone does not provide explicit process adjustment. The document then evaluates stochastic approximation and Kalman filter approaches for online process adjustment to minimize quality deviations more efficiently. It proposes applying these techniques to setup adjustment problems and compares their performance to traditional methods using small sample analysis.
Compit 2013 - Torsional Vibrations under Ice Impact (SimulationX)
- The document discusses bridging the gap between steady-state and transient simulation for torsional vibrations under ice impact.
- It introduces modeling methods that allow both transient and steady-state analysis to operate on the same model base using a unified framework based on ordinary differential equations.
- It also discusses propeller modeling that incorporates established steady-state and transient methods and is being certified by classification societies for compliance with ice class simulation requirements.
A parsimonious SVM model selection criterion for classification of real-world ... (o_almasi)
This paper proposes and optimizes a two-term cost function, consisting of a sparseness term and a generalized v-fold cross-validation term, using a new adaptive particle swarm optimization (APSO). APSO updates its parameters adaptively based on dynamic feedback from the success rate of each particle's personal best. Since the proposed cost function favors choosing fewer support vectors, the complexity of the SVM models decreases while the accuracy remains in an acceptable range. The testing time therefore decreases, making SVM more applicable to practical problems on real data sets. A comparative study on data sets from the UCI database compares the proposed cost function with a conventional cost function to demonstrate its effectiveness.
Signature PSO: A novel inertia weight adjustment using fuzzy signature for LQ... (journalBEEI)
Particle swarm optimization (PSO) is a simple and reliable optimization algorithm. The balance between exploration and exploitation in PSO's search behavior is maintained by the inertia weight. Since this parameter was introduced, several different strategies have been proposed for determining the inertia weight during a run. This paper describes a method of adjusting the inertia weight using fuzzy signatures, called signature PSO. Some parameters are used as fuzzy signature variables to represent the particle situation in a run. The implementation to solve the tuning problem of linear quadratic regulator (LQR) control parameters is also presented in this paper. Another weight adjustment strategy is used as a comparison in a performance evaluation using the integral time absolute error (ITAE). Experimental results show that signature PSO was able to give a good approximation to the optimum control parameters of the LQR in this case.
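As background for the inertia-weight discussion, here is a minimal PSO with the widely used linearly decreasing inertia-weight schedule (an illustrative baseline, not the fuzzy-signature method; the function name, test objective, and parameter values are all assumptions):

```python
import random

def pso_sphere(dim=2, n_particles=20, iters=100, w_max=0.9, w_min=0.4,
               c1=2.0, c2=2.0, bound=5.0, seed=1):
    """Plain global-best PSO minimizing the sphere function f(x) = sum(x_i^2),
    with an inertia weight that decreases linearly from w_max to w_min."""
    f = lambda x: sum(v * v for v in x)
    rng = random.Random(seed)
    pos = [[rng.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)   # linear schedule
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v = (w * vel[i][d]
                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                     + c2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-bound, min(bound, v))  # clamp velocity
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

With the linear schedule, early iterations (large w) favor exploration while late iterations (small w) favor exploitation around the global best; the fuzzy-signature approach in the paper replaces this fixed schedule with an adaptive one.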
The document describes AFMM, a program for parametrizing molecular mechanics force fields. AFMM iteratively optimizes force field parameters to fit normal modes from quantum chemical calculations. It minimizes a merit function considering both vibrational frequencies and eigenvector projections. The program uses a Monte Carlo algorithm to refine parameters and improve the fit to reference quantum data, replacing manual parametrization.
A Closed-form Solution to Photorealistic Image Stylization (SherozbekJumaboev)
This document presents a new method for photorealistic image stylization. The method has two steps: 1) A stylization step called PhotoWCT that transfers the style of a reference photo to the content photo. 2) A smoothing step that ensures spatially consistent stylizations by reducing artifacts in semantically similar regions. The key aspects of the PhotoWCT method are that it uses an encoder-decoder network with unpooling layers instead of upsampling to better preserve spatial information, and it runs in closed-form instead of optimization-based like other methods. Experiments show the method generates higher quality stylizations faster than state-of-the-art techniques with fewer artifacts.
Nonlinear combination of intensity measures for response prediction of RC bui... (openseesdays)
This document discusses using nonlinear combinations of intensity measures (IMs) to more accurately predict engineering demand parameters (EDPs) for response analysis of reinforced concrete (RC) buildings. It presents an evolutionary polynomial regression (EPR) technique to model complex nonlinear relationships between IMs and EDPs without assumptions about the form of the relationship. The EPR technique is applied to dynamic analyses of an RC framed building to predict maximum inter-story drift ratio and maximum floor acceleration under earthquake ground motions, demonstrating more accurate predictions compared to a single IM, especially for base-isolated buildings. The document concludes more accurate EDP predictions can be obtained through nonlinear IM combinations and advocates improving data-driven modeling before proposing new IMs.
It was presented at the Department of Atmospheric Sciences for the award of an M.Tech degree. It covers research on a high-resolution ARW model for tropical cyclone simulations.
FINITE ELEMENT ANALYSIS OF RIGID PAVEMENT USING EVERFE2.24 & COMPARISON OF RE... (civej)
In this study, the analysis of a plain cement concrete pavement was done with the 3-D mechanistic FEM computer programme EVERFE2.24, developed by Bill Davids at the University of Maine, USA. The rigid pavement is modelled as a flat slab with DLC as the base course and the subgrade beneath it. Stresses in the rigid pavement at critical locations were calculated due to the combined effect of axle load and environmental factors. These results are compared with IRC58-2015 and IRC58-2002; the disparity between the results is analysed and plotted on graphs. This study finds that the stresses given by IRC58-2015 are up to 42% less than those given by IRC58-2002, and the stresses given by EverFE2.24 are nearly the same as those given by IRC58-2002. It also highlights some issues related to the new design code, IRC58-2015.
This document introduces the fuzzy model reference learning control (FMRLC) method. FMRLC uses a reference model to provide feedback to modify the membership functions of a fuzzy controller. This allows the closed-loop system to behave like the reference model and achieve the desired performance. The effectiveness of FMRLC is demonstrated through its application to rocket velocity control and robot manipulator control. FMRLC can achieve high performance learning control for nonlinear, time-varying systems.
Validation of Polarization angles Based Resonance Modes (IJERA Editor)
The symmetry, tilt, and elongation degrees are figures of merit which can be used to describe the radar target shape once incorporated with the target resonance modes. Through optimization of the second moments of the quadrature-polarized residues matrix, the angles are determined by the optimum co-null polarization states. The approach is tested and validated against low signal-to-noise ratio and also against the late-time onset selection when extracting the mode set. A wire plane model is used, and the results show that with ensemble averaging it is possible to obtain a robust polarization angle set, even with a small number of samples.
Particle Swarm Optimization for the Path Loss Reduction in Suburban and Rural... (IJECEIAES)
In the present work, a precise optimization method is proposed for tuning the parameters of the COST231 model to improve its accuracy in the path loss propagation prediction. The Particle Swarm Optimization is used to tune the model parameters. The predictions of the tuned model are compared with the most popular models. The performance criteria selected for the comparison of various empirical path loss models is the Root Mean Square Error (RMSE). The RMSE between the actual and predicted data are calculated for various path loss models. It turned out that the tuned COST 231 model outperforms the other studied models.
1) The document discusses ground excited systems, where the dynamic equations of motion are derived based on the relative displacement of the structure with respect to the ground acceleration vector.
2) Modal superposition is applied to decompose the equations into uncoupled modal equations, which are then solved to obtain the system response in terms of maximum displacements, storey shears, moments and drifts.
3) Several modal combination rules are discussed to combine the individual modal responses, including SRSS, CQC and double sum methods.
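The SRSS and CQC rules mentioned in point 3 can be sketched directly (a generic illustration using the standard equal-damping CQC correlation coefficient; this is not code from the document, and the function names are my own):

```python
import math

def srss(modal_responses):
    """Square-Root-of-Sum-of-Squares combination of peak modal responses."""
    return math.sqrt(sum(r * r for r in modal_responses))

def cqc(modal_responses, freqs, zeta):
    """Complete Quadratic Combination of peak modal responses.

    Uses the standard correlation coefficient for equal modal damping
    zeta (must be > 0); freqs are the modal natural frequencies."""
    n = len(modal_responses)
    total = 0.0
    for i in range(n):
        for j in range(n):
            b = freqs[i] / freqs[j]  # frequency ratio beta_ij
            rho = (8 * zeta**2 * (1 + b) * b**1.5
                   / ((1 - b**2)**2 + 4 * zeta**2 * b * (1 + b)**2))
            total += rho * modal_responses[i] * modal_responses[j]
    return math.sqrt(total)
```

For well-separated frequencies the correlation coefficients are near zero and CQC reduces to SRSS; for closely spaced modes the cross terms become significant and SRSS can misestimate the combined response.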
Hybrid of Ant Colony Optimization and Gravitational Emulation Based Load Bala... (IRJET Journal)
This document proposes a hybrid Ant Colony Optimization (ACO) and Gravitational Emulation Local Search (GELS) algorithm for load balancing in cloud computing. ACO is combined with GELS to take advantage of both algorithms - ACO is good for global search using pheromone trails while GELS is powerful for local search based on gravitational attraction. The hybrid algorithm is tested using CloudSim and shows improvements over existing algorithms like GA-GELS in metrics like resource utilization, makespan, and load balancing level.
The document describes the Model Induced Metropolis-Hastings (MIMH) algorithm for efficiently sampling from high-performance regions of costly objective functions. MIMH performs Metropolis-Hastings random walks on a radial basis function network (RBFN) model of the objective function. After each walk, the endpoint is added to the RBFN training set to improve the model. Experiments show MIMH finds good solutions with significantly fewer objective function evaluations than other algorithms like Niching ES, and the number of evaluations can be reduced further by raising the acceptance probability exponent. MIMH provides an effective way to identify high-performance regions at low cost for initializing more greedy optimization methods.
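The core sampling step MIMH builds on can be sketched as a plain random-walk Metropolis-Hastings loop (an illustrative sketch, not the MIMH code; the RBFN surrogate and the training-set updates are omitted, and all names are assumptions):

```python
import math
import random

def metropolis_hastings(f, x0, step, n_samples, seed=0):
    """Random-walk Metropolis-Hastings targeting the density proportional
    to exp(-f(x)), so low-cost regions of f are visited most often."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    samples = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, step)  # symmetric Gaussian proposal
        fc = f(cand)
        # accept with probability min(1, exp(fx - fc))
        if fc <= fx or rng.random() < math.exp(fx - fc):
            x, fx = cand, fc
        samples.append(x)
    return samples
```

In MIMH this walk runs on the cheap surrogate rather than the costly objective, and sharpening the acceptance probability (raising its exponent) makes the walk greedier, which is how the number of true objective evaluations is reduced.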
Dillon Hegarty 4.4 Professional Persona Project (Dillon Hegarty)
Dillon Hegarty is an audio engineer and creative individual who owns Matchless Studios. He is committed to providing quality recordings through programs like Pro Tools, Logic Pro, and Cubase. His goal is to make the recording process enjoyable and rewarding for clients, helping to bring their ideas to life and create something they are proud to release. He strives for continuous self-improvement through experiences like touring with bands, completing internships, and attending Full Sail University.
Activity 4.1 (main representatives of behaviorism and constructivism) (Lenin Canduelas)
The document summarizes the biographies and contributions of four important figures in psychology and education: John B. Watson, founder of behaviorism; Burrhus Frederic Skinner and his research on reinforcement; Ivan Pavlov and his work on conditioned reflexes; and Jean Piaget and his constructivist theory of children's cognitive development. It briefly explains their main ideas and contributions to the field of education.
Signature PSO: A novel inertia weight adjustment using fuzzy signature for LQ...journalBEEI
Particle swarm optimization (PSO) is an optimization algorithm that is simple and reliable to complete optimization. The balance between exploration and exploitation of PSO searching characteristics is maintained by inertia weight. Since this parameter has been introduced, there have been several different strategies to determine the inertia weight during a train of the run. This paper describes the method of adjusting the inertia weights using fuzzy signatures called signature PSO. Some parameters were used as a fuzzy signature variable to represent the particle situation in a run. The implementation to solve the tuning problem of linear quadratic regulator (LQR) control parameters is also presented in this paper. Another weight adjustment strategy is also used as a comparison in performance evaluation using an integral time absolute error (ITAE). Experimental results show that signature PSO was able to give a good approximation to the optimum control parameters of LQR in this case.
The document describes AFMM, a program for parametrizing molecular mechanics force fields. AFMM iteratively optimizes force field parameters to fit normal modes from quantum chemical calculations. It minimizes a merit function considering both vibrational frequencies and eigenvector projections. The program uses a Monte Carlo algorithm to refine parameters and improve the fit to reference quantum data, replacing manual parametrization.
A Closed-form Solution to Photorealistic Image StylizationSherozbekJumaboev
This document presents a new method for photorealistic image stylization. The method has two steps: 1) A stylization step called PhotoWCT that transfers the style of a reference photo to the content photo. 2) A smoothing step that ensures spatially consistent stylizations by reducing artifacts in semantically similar regions. The key aspects of the PhotoWCT method are that it uses an encoder-decoder network with unpooling layers instead of upsampling to better preserve spatial information, and it runs in closed-form instead of optimization-based like other methods. Experiments show the method generates higher quality stylizations faster than state-of-the-art techniques with fewer artifacts.
Nonlinear combination of intensity measures for response prediction of RC bui...openseesdays
This document discusses using nonlinear combinations of intensity measures (IMs) to more accurately predict engineering demand parameters (EDPs) for response analysis of reinforced concrete (RC) buildings. It presents an evolutionary polynomial regression (EPR) technique to model complex nonlinear relationships between IMs and EDPs without assumptions about the form of the relationship. The EPR technique is applied to dynamic analyses of an RC framed building to predict maximum inter-story drift ratio and maximum floor acceleration under earthquake ground motions, demonstrating more accurate predictions compared to a single IM, especially for base-isolated buildings. The document concludes more accurate EDP predictions can be obtained through nonlinear IM combinations and advocates improving data-driven modeling before proposing new IMs.
It was presented at the Dept. Of Atmospheric Sciences for the award of M.Tech degree. It is all about the research in high resolution ARW model for tropical cyclones simulations.
FINITE ELEMENT ANALYSIS OF RIGID PAVEMENT USING EVERFE2.24& COMPARISION OF RE...civej
In this study analysis of plain cement concrete pavement was done with 3-D mechanistic FEM computer
programme EVERFE2.24. This programme was developed by Bill David, University of Maine,USA. Rigid
pavement is modelled as a flat slab with DLC as base course and subgrade beneath it.
Stresses in rigid pavement at critical location was calculated due to combined effect of axle load and
environmental factor.These results are compared with IRC58-2015&2002.The disparity between results
are analysed and plotted on graph.
This study finds that stressesgiven by IRC58-2015 is up to 42% less than that given by IRC58-2002, and
stresses given by EverFE2.24 is nearly same as given by IRC58-2002.italso highlighted some issues related
to new code of design i.e. IRC58-2015.
This document introduces the fuzzy model reference learning control (FMRLC) method. FMRLC uses a reference model to provide feedback to modify the membership functions of a fuzzy controller. This allows the closed-loop system to behave like the reference model and achieve the desired performance. The effectiveness of FMRLC is demonstrated through its application to rocket velocity control and robot manipulator control. FMRLC can achieve high performance learning control for nonlinear, time-varying systems.
Validation of Polarization angles Based Resonance Modes IJERA Editor
The symmetry, tilt and elongation degrees are figures of merit which can be used to describe the radar target
shape once incorporated with the target resonance modes. Through optimization of the second moments of the
quadrature-polarized residues matrix, the angles are determined by the optimum co-null polarization states. The
approach is tested and validated against low signal-to-noise ratio and also the late-time onset selection when
extracting the mode set. A wire plane model is used and the results show that with ensemble averaging it
possible to have robust polarization angle set, even with small number of sample set
Particle Swarm Optimization for the Path Loss Reduction in Suburban and Rural...IJECEIAES
In the present work, a precise optimization method is proposed for tuning the parameters of the COST231 model to improve its accuracy in the path loss propagation prediction. The Particle Swarm Optimization is used to tune the model parameters. The predictions of the tuned model are compared with the most popular models. The performance criteria selected for the comparison of various empirical path loss models is the Root Mean Square Error (RMSE). The RMSE between the actual and predicted data are calculated for various path loss models. It turned out that the tuned COST 231 model outperforms the other studied models.
1) The document discusses ground excited systems, where the dynamic equations of motion are derived based on the relative displacement of the structure with respect to the ground acceleration vector.
2) Modal superposition is applied to decompose the equations into uncoupled modal equations, which are then solved to obtain the system response in terms of maximum displacements, storey shears, moments and drifts.
3) Several modal combination rules are discussed to combine the individual modal responses, including SRSS, CQC and double sum methods.
Hybrid of Ant Colony Optimization and Gravitational Emulation Based Load Bala...IRJET Journal
This document proposes a hybrid Ant Colony Optimization (ACO) and Gravitational Emulation Local Search (GELS) algorithm for load balancing in cloud computing. ACO is combined with GELS to take advantage of both algorithms - ACO is good for global search using pheromone trails while GELS is powerful for local search based on gravitational attraction. The hybrid algorithm is tested using CloudSim and shows improvements over existing algorithms like GA-GELS in metrics like resource utilization, makespan, and load balancing level.
The document describes the Model Induced Metropolis-Hastings (MIMH) algorithm for efficiently sampling from high-performance regions of costly objective functions. MIMH performs Metropolis-Hastings random walks on a radial basis function network (RBFN) model of the objective function. After each walk, the endpoint is added to the RBFN training set to improve the model. Experiments show MIMH finds good solutions with significantly fewer objective function evaluations than other algorithms like Niching ES, and the number of evaluations can be reduced further by raising the acceptance probability exponent. MIMH provides an effective way to identify high-performance regions at low cost for initializing more greedy optimization methods.
Dillon Hegarty 4.4 Professional Persona ProjectDillon Hegarty
Dillon Hegarty is an audio engineer and creative individual who owns Matchless Studios. He is committed to providing quality recordings through programs like Pro Tools, Logic Pro, and Cubase. His goal is to make the recording process enjoyable and rewarding for clients, helping to bring their ideas to life and create something they are proud to release. He strives for continuous self-improvement through experiences like touring with bands, completing internships, and attending Full Sail University.
Actividad 4.1 (principales representantes del conductismo y constructivismo)Lenin Canduelas
El documento resume las biografías y aportes de cuatro importantes figuras en psicología y educación: John B. Watson, fundador del conductismo; Burrhus Frederic Skinner y sus investigaciones sobre el refuerzo; Iván Pavlov y su trabajo sobre los reflejos condicionados; y Jean Piaget y su teoría constructivista sobre el desarrollo cognitivo infantil. Explica brevemente sus principales ideas y contribuciones al campo de la educación.
Johanna Jeung is a senior at Clarkston High School in Clarkston, Michigan with a 4.2 GPA. She has worked various jobs including at American Eagle Outfitters, Sagano Japanese Bistro, and as a private dog walker. She has shadowed doctors in various fields. Her extensive volunteer and leadership experience includes with organizations like Clarkston Help, National Art Honors Society, and UNICEF. She has received several awards for her art and academics. Her extracurricular activities demonstrate leadership, community service, and passion for the arts.
O documento descreve 30 tipos de beijos de acordo com o Kama Sutra, incluindo beijos laterais, inclinados e diretos, além de mordidas em diferentes partes do corpo para aumentar o prazer durante a relação sexual.
20140528 - ESGs (Czech Society of Actuaries) - Shaun LazzariShaun Lazzari
This document discusses testing and validating stochastic economic scenarios. It covers:
1) Using economic scenario generators (ESGs) to generate scenarios for variables like interest rates, equities, and credit spreads for purposes like valuation and risk analysis.
2) Formulating calibration assumptions, which involves calibrating models to market data while addressing data limitations.
3) Validating scenario sets through analyses like no-arbitrage tests, market consistency checks, and assessing distributional features to ensure scenarios are reasonable.
La estudiante Lucía Escalera Aguilar creó una lotería educativa sobre la fauna mexicana para enseñar a los niños los nombres de 25 animales en náhuatl. Cada tarjeta presenta un animal mexicano con su nombre en náhuatl e incluye un dicho sobre el animal en la parte posterior. El estilo de la lotería se inspiró en el cubismo con líneas rectas y alto contraste, y la paleta de colores incluye los colores mexicanos tradicionales de rosa, amarillo y verde.
The document discusses the Clover Mini, a new payment terminal from First Data Solutions that is small, flexible, and easy to use. It accepts multiple payment types like chip cards and contactless payments. It protects customer information with built-in security and allows businesses to offer loyalty programs and access business insights. The Clover Mini runs on cloud-based software for easy access from any device.
The document discusses Genpact Sports Day, a corporate social responsibility event held in Bucharest, Romania. It was the largest CSR event in Europe, with over 3,000 Genpact employees volunteering to organize sports activities and donate equipment for underprivileged children. Employees helped prepare for the event, engaged teams of children, and helped generate impact through their efforts. The event was a success in executing Genpact's mission and making a positive difference in the community.
The document discusses research conducted into music videos to inspire a fun and upbeat music video with a simple story line. Research on YouTube and Google found videos like "Easy Love" by Sigala and "Midnight Memories" by One Direction that influenced elements like a party scene. Audience research on mobile phones provided insights into the preferred style and image of the target demographic. Various technologies like the internet, cameras, and editing software were used to develop the music video, conduct location scouting, film, add special effects, and ensure it was synchronized to the music. Feedback was gathered through surveys to improve the video, digipak, and website created to promote the artist.
This document discusses extending predictive stability indicators (PSI) to multi-degree-of-freedom systems for use in configuring real-time hybrid simulations (RTHS). RTHS combines physical and numerical substructures, with actuators enforcing interface conditions. PSI previously assessed how partitioning choices impact RTHS stability for single-degree systems. The study develops a novel matrix method to analytically solve delay differential equations and obtain the PSI for linear multi-degree-of-freedom systems. Through examples, the MDOF PSI is demonstrated and validated, including comparisons to models including actuator and control dynamics. The results show the PSI can effectively assess RTHS configurations for stability prior to testing.
Parameter Estimation using Experimental Bifurcation DiagramsAndy Salmon
The document discusses parameter estimation of an aerodynamic model from experimental bifurcation diagrams. It summarizes an experiment that captured dynamic characteristics of a scale aircraft model, observing a post-stall pitch oscillation. The author proposes a novel method of parameter estimation using bifurcation analysis rather than time domain analysis to estimate dynamic parameters governing the limit cycle oscillation. However, problems were encountered in that the equations of motion were incomplete and inaccuracies in a numerical simulation prevented successful implementation of the bifurcation-based estimation method. Further work is needed to fully understand and model the system to allow the proposed approach.
1) The document compares the accuracy of empirical (HOSE code, neural network) and quantum-mechanical (QM) methods for predicting 13C NMR chemical shifts.
2) It analyzes 205 molecules where experimental and QM-calculated 13C shifts were published, and calculates shifts using HOSE code, neural network, and QM methods.
3) The mean absolute errors (MAE) were 1.58 ppm for HOSE code, 1.91 ppm for neural network, and 3.29 ppm for QM methods, indicating that the empirical methods provided more accurate predictions for this data set on average.
This document summarizes a research article that proposes using Hidden Semi-Markov Models (HSMMs) for predictive maintenance applications. Some key points:
- HSMMs allow modeling the duration a system spends in each state, which provides more accurate modeling than traditional HMMs for applications where state duration is important.
- The proposed HSMM models state duration with a parametric distribution rather than a non-parametric one, reducing the number of parameters needed. It also does not constrain the type of duration distribution or observation process used.
- The paper describes adapting learning, inference and prediction algorithms for the proposed HSMM. It also proposes using the Akaike Information Criterion for automated model selection
Overview combining ab initio with continuum theoryDierk Raabe
Multi-methodological approaches combining quantum-mechanical and/or atomistic simulations
with continuum methods have become increasingly important when addressing multi-scale phenomena in
computational materials science. A crucial aspect when applying these strategies is to carefully check,
and if possible to control, a variety of intrinsic errors and their propagation through a particular multimethodological
scheme. The first part of our paper critically reviews a few selected sources of errors
frequently occurring in quantum-mechanical approaches to materials science and their multi-scale propagation
when describing properties of multi-component and multi-phase polycrystalline metallic alloys.
Our analysis is illustrated in particular on the determination of i) thermodynamic materials properties at
finite temperatures and ii) integral elastic responses. The second part addresses methodological challenges
emerging at interfaces between electronic structure and/or atomistic modeling on the one side and selected
continuum methods, such as crystal elasticity and crystal plasticity finite element method (CEFEM and
CPFEM), new fast Fourier transforms (FFT) approach, and phase-field modeling, on the other side.
2-DOF Block Pole Placement Control Application To: Have-DASH-IIBITT MissileZac Darcy
In a multivariable servomechanism design, it is required that the output vector tracks a certain reference
vector while satisfying some desired transient specifications, for this purpose a 2DOF control law
consisting of state feedback gain and feedforward scaling gain is proposed. The control law is designed
using block pole placement technique by assigning a set of desired Block poles in different canonical forms.
The resulting control is simulated for linearized model of the HAVE DASH II BTT missile; numerical
results are analyzed and compared in terms of transient response, gain magnitude, performance
robustness, stability robustness and tracking. The suitable structure for this case study is then selected.
2-DOF Block Pole Placement Control Application To: Have-DASH-IIBITT MissileZac Darcy
In a multivariable servomechanism design, it is required that the output vector tracks a certain reference
vector while satisfying some desired transient specifications, for this purpose a 2DOF control law
consisting of state feedback gain and feedforward scaling gain is proposed. The control law is designed
using block pole placement technique by assigning a set of desired Block poles in different canonical forms.
The resulting control is simulated for linearized model of the HAVE DASH II BTT missile; numerical
results are analyzed and compared in terms of transient response, gain magnitude, performance
robustness, stability robustness and tracking. The suitable structure for this case study is then selected.
This document describes a testbed for image synthesis developed at Cornell University. The testbed was designed to facilitate research on new light reflection models, global illumination algorithms, and rendering of complex scenes. It uses a modular structure with hierarchical levels of functionality. The lowest level contains utility modules, the middle level contains object modules that work across primitive types, and the highest level contains image synthesis modules. The testbed uses a modeler-independent description format to represent environments independently of modeling programs. Renderers can then generate images from this common description.
2-DOF BLOCK POLE PLACEMENT CONTROL APPLICATION TO:HAVE-DASH-IIBTT MISSILEZac Darcy
In a multivariable servomechanism design, it is required that the output vector tracks a certain reference
vector while satisfying some desired transient specifications, for this purpose a 2DOF control law
consisting of state feedback gain and feedforward scaling gain is proposed. The control law is designed
using block pole placement technique by assigning a set of desired Block poles in different canonical forms.
The resulting control is simulated for linearized model of the HAVE DASH II BTT missile; numerical
results are analyzed and compared in terms of transient response, gain magnitude, performance
robustness, stability robustness and tracking. The suitable structure for this case study is then selected.
Maximum likelihood estimation-assisted ASVSF through state covariance-based 2...TELKOMNIKA JOURNAL
The smooth variable structure filter (ASVSF) has been relatively considered as a new robust predictor-corrector method for estimating the state. In order to effectively utilize it, an SVSF requires the accurate system model, and exact prior knowledge includes both the process and measurement noise statistic. Unfortunately, the system model is always inaccurate because of some considerations avoided at the beginning. Moreover, the small addictive noises are partially known or even unknown. Of course, this limitation can degrade the performance of SVSF or also lead to divergence condition. For this reason, it is proposed through this paper an adaptive smooth variable structure filter (ASVSF) by conditioning the probability density function of a measurement
to the unknown parameters at one iteration. This proposed method is assumed to accomplish the localization and direct point-based observation task of a wheeled mobile robot, TurtleBot2. Finally, by realistically simulating it and comparing to a conventional method, the proposed method has been showing a better accuracy and stability in term of root mean square error (RMSE) of the estimated map coordinate (EMC) and estimated path coordinate (EPC).
Relevance Vector Machines for Earthquake Response Spectra drboon
This study uses Relevance Vector Machine (RVM) regression to develop a probabilistic model for the average horizontal component of 5%-damped earthquake response spectra. Unlike conventional models, the proposed approach does not require a functional form, and constructs the model based on a set predictive variables and a set of representative ground motion records. The RVM uses Bayesian inference to determine the confidence intervals, instead of estimating them from the mean squared errors on the training set. An example application using three predictive variables (magnitude, distance and fault mechanism) is presented for sites with shear wave velocities ranging from 450 m/s to 900 m/s. The predictions from the proposed model are compared to an existing parametric model. The results demonstrate the validity of the proposed model, and suggest that it can be used as an alternative to the conventional ground motion models. Future studies will investigate the effect of additional predictive variables on the predictive performance of the model.
Relevance Vector Machines for Earthquake Response Spectra drboon
This study uses Relevance Vector Machine (RVM) regression to develop a probabilistic model for the average horizontal component of 5%-damped earthquake response spectra. Unlike conventional models, the proposed approach does not require a functional form, and constructs the model based on a set predictive variables and a set of representative ground motion records. The RVM uses Bayesian inference to determine the confidence intervals, instead of estimating them from the mean squared errors on the training set. An example application using three predictive variables (magnitude, distance and fault mechanism) is presented for sites with shear wave velocities ranging from 450 m/s to 900 m/s. The predictions from the proposed model are compared to an existing parametric model. The results demonstrate the validity of the proposed model, and suggest that it can be used as an alternative to the conventional ground motion models. Future studies will investigate the effect of additional predictive variables on the predictive performance of the model.
This document discusses the concept of equifinality in complex environmental systems modeling. Equifinality refers to the idea that there are many different model structures and parameter sets that can produce similar and acceptable results when modeling system behavior. The generalized likelihood uncertainty estimation (GLUE) methodology is described as a way to account for equifinality by using ensembles of behavioral models weighted by their likelihood to estimate prediction uncertainties. An example application to rainfall-runoff modeling is used to illustrate the GLUE methodology.
This document discusses recursive least-squares estimation when observation data contains interval uncertainty, also known as imprecision, in addition to random variability. It introduces a recursive formulation of least-squares estimation that efficiently combines the most recent parameter estimate with new observation data. Overestimation is a key challenge for recursive formulations when working with interval data that must be rigorously avoided. The paper also presents an illustrative example of estimating the state of a damped harmonic oscillation using the proposed recursive interval least-squares approach.
The document proposes a new method called the Brownian correlation metric prototypical network (BCMPN) for fault diagnosis of rotating machinery. The BCMPN uses a multi-scale mask preprocessing mechanism to improve model performance. It extracts multi-scale features using dilation convolution and an effective light channel attention module. For classification, it measures the difference between the joint feature function and product of marginal distributions using Brownian distance, unlike existing methods that use Euclidean or cosine distance. Experiments on gear dataset and laboratory data show the BCMPN performs better than other methods for problems with few training samples and zero samples in the target domain.
This document describes two distributed-memory parallelization schemes for efficiently parallelizing an explicit time-domain volume integral equation solver on the IBM Blue Gene/P supercomputer. The first scheme distributes the computationally intensive tested field computations among processors while storing the source field time histories on each processor, requiring all-to-all global communications. The second scheme distributes both the source fields and tested field computations, requiring sequential global communications. Numerical results show that both schemes scale well on Blue Gene/P, and the second more memory-efficient scheme allows solving problems with up to 3 million unknowns without acceleration. The parallel solver is demonstrated on the problem of light scattering from a red blood cell.
Nonlinear filtering approaches to field mapping by sampling using mobile sensorsijassn
This work proposes a novel application of existing powerful nonlinear filters, such as the standard
Extended Kalman Filter (EKF), some of its variants and the standard Unscented Kalman Filter (UKF), to
the estimation of a continuous spatio-temporal field that is spread over a wide area, and hence represented
by a large number of parameters when parameterized. We couple these filters with the powerful scheme of
adaptive sampling performed by a single mobile sensor, and investigate their performances with a view to
significantly improving the speed and accuracy of the overall field estimation. An extensive simulation work
was carried out to show that different variants of the standard EKF and the standard UKF can be used to
improve the accuracy of the field estimate. This paper also aims to provide some guideline for the user of
these filters in reaching a practical trade-off between the desired field estimation accuracy and the
required computational load.
ADAPTIVE SEGMENTATION OF CELLS AND PARTICLES IN FLUORESCENT MICROSCOPE IMAGEJournal For Research
The document presents an adaptive segmentation method for segmenting cells and particles in fluorescent microscope images. It involves applying a coherence-enhancing diffusion filter to reduce noise and enhance structures, followed by using the Chan-Vese model to detect cell boundaries. The method allows simultaneous tracking of multiple cells over time by integrating both fast level set and graph cut frameworks with a topological prior. It is demonstrated on 2D and 3D time-lapse images of stem cells and carcinoma cells.
Fault tolerant synchronization of chaotic heavy symmetric gyroscope systems v...ISA Interchange
CU-NEES-06-08
NEES at CU Boulder
The George E. Brown, Jr. Network for Earthquake Engineering Simulation
The CU-Boulder Fast Hybrid Test
Integration Schemes for
Fast Hybrid Testing
by
Dr. Eric Stauffer
Technical Director
Department of Civil, Environmental and Architectural Engineering
September 2006
University of Colorado
UCB 428
Boulder, Colorado 80309-0428
1 Abstract
The Fast Hybrid Testing (FHT) system at the University of Colorado (CU) enhances the conventional pseudodynamic testing method by enabling real-time, or close to real-time, earthquake simulations with consistent scaling of the temporal component. The CU FHT system achieves a rate of loading that is significantly higher than that of conventional pseudodynamic testing and has achieved hard realtime for a variety of test configurations. In general, hybrid simulation presents a broad set of challenges because a numerical and an experimental structure are simultaneously involved in, and interact throughout, the simulation. Until fairly recently, realtime hybrid simulation was not feasible owing to computational and technological limitations. Advances in realtime networking technology, realtime operating systems, and computational efficiency have made realtime earthquake simulation within the context of a hybrid model possible. This paper summarizes recent advances in the techniques utilized at the CU NEES Fast Hybrid Testing Facility, in particular the numerical integration scheme that is so central to FHT. The FHT system at CU is part of the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES).
2 Introduction
Central to the task of hybrid simulation is the numerical engine that drives the experimental
component in unison with the numerical component of a test. The modular nature of finite
element modeling is ideally suited to this task and has been used widely for hybrid testing.
In the past these techniques were called pseudodynamic and involved an ever-changing distortion of the time scaling of a simulation. This distortion is the result of the variability in the time it takes to complete the computation for each time step and then update the command signals controlling a portion of the displacement field for the experimental component. This approach remains the state of the art in many research laboratories throughout the US. However, with the increased availability of greater computational power, realtime networking, and realtime operating systems, it is possible to maintain a consistent scaling of time throughout a simulation. Within limits set by computational speed and model complexity, hard realtime is in fact possible. Realtime here implies a 1-to-1 scaling of prototype vibration to simulation vibration with no temporal distortion.
Direct integration methods are employed to establish equilibrium and displacement continuity at discrete intervals of time specified by a time step ∆t. A wide variety of integration techniques are standard features in most FEM software packages. Fundamentally these techniques can be categorized as either explicit or implicit, each having inherent limitations and capabilities. The explicit schemes are attractive in their relative simplicity and numerical efficiency but place a limitation on the size of the time step owing to issues of numerical stability. The implicit schemes are typically more complex, involving iteration for the nonlinear case, but have more favorable stability characteristics. In fact, for linear models and with properly selected integration parameters, it has been established that implicit schemes such as the α method are unconditionally stable, so that despite the presence of high frequencies within a model an arbitrarily large time step may be used. For this reason the α method has been selected as the basis for the direct integration routine used in the CU NEES FHT system.
In applying direct integration techniques to a consistently time-scaled hybrid simulation it is necessary to place constraints on, and add special features to, the numerical implementation. Most of these constraints and features are a result of particular needs stemming from the experimental
CU-NEES-06-08 CU-Boulder FHT Integration Schemes
or physical component of a hybrid simulation. The need for these constraints will be explained in more detail shortly. These constraints and features require access to, and modification of, the integration algorithm and other aspects of the FEM source code. Currently the CU NEES facility uses OpenSees for most of its hybrid simulation needs. OpenSees is a fully object-oriented, open-source modeling framework intended primarily for the earthquake engineering research community. OpenSees is supported by a small staff of programmers at the University of California, Berkeley and remains affiliated with the Pacific Earthquake Engineering Research (PEER) Center. Recently, effort at the CU NEES facility has been directed toward generalizing the added hybrid code segments so that they can be used within other FEM codes or serve as a basis for the development of a new code specifically designed for the unique demands of hybrid simulation.
3 Background
The pseudodynamic test method, originally developed by Takanashi et al. (1975), provides a systematic means of recreating earthquake-like loading on a test component by directly integrating a discrete representation of the governing equations of motion. The reduction of the structure from a continuum to a finite set of discrete equations may be achieved by application of the finite element method, resulting in a second order ordinary differential equation.
Ma + Cv + r(x) = f (1)
where M and C are the mass and viscous damping matrices for the idealization of the
structure, r the restoring force vector resulting from deformation x, and f is the vector of ap-
plied forces due to a seismic event or some other dynamic stimulus. Typical application of the
pseudodynamic test method proceeds on the basis that M and C are known and understood
sufficiently to be represented purely numerically while some portion or perhaps the entirety of
the restoring force is determined experimentally due to some uncertainty or potentially complex
nonlinear behavior. For the conventional pseudodynamic test method the physical test compo-
nent is subjected to displacements on relevant degrees of freedom in some quasi-static manner,
typically ramp and hold. The duration of the ramp and hold phase of each time step may be
set arbitrarily large to accommodate inadequacies in the performance of the testing equipment,
communication latency, and/or computational speed. Recently efforts have focused on faster
and perhaps continuous application of motion to the physical test component (Magonette 2001, Nakashima et al. 1992, Horiuchi et al. 1996, Darby et al. 1999, Shing et al. 2002, Mosqueda et al. 2005). These efforts culminate in simulations that are conducted in realtime and preserve
the original rate of loading thereby reducing the compromising effects of load relaxation, re-
duced and/or inconsistent strain rates and relaxed or neglected similitude laws. Rate sensitive
devices such as semi-active dampers, i.e. magnetorheological dampers, require that hybrid sim-
ulations be conducted in realtime. The Fast Hybrid Testing (FHT) system at the University of
Colorado NEES facility utilizes a customized unconditionally stable implicit integration scheme
(Hilber, Hughes and Taylor 19??) to achieve fast and continuous motion. Realtime performance
has been achieved for several different experimental test configurations, most recently involving
200 kN MR dampers. The method used to maintain displacement continuity and force
equilibrium throughout the numerical integration process for the hybrid test structure will be
the focus of this paper.
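The conventional pseudodynamic loop described above can be sketched as follows. This is an illustrative reading for a single degree of freedom using an explicit central-difference step, not the CU-NEES implementation; the function name and the `measure_restoring_force` callback are hypothetical placeholders for commanding the specimen to a displacement (ramp and hold) and reading back its resisting force.

```python
def pseudodynamic_run(M, C, f, d0, v0, dt, n_steps, measure_restoring_force):
    """Minimal central-difference pseudodynamic loop for a SDOF system.

    M and C are scalar mass and viscous damping, f is a sequence of applied
    forces, and measure_restoring_force(d) stands in for the experimental
    determination of the restoring force r(x) at commanded displacement d.
    """
    d_prev = d0 - dt * v0          # first-order start-up value
    d = d0
    history = [d0]
    for i in range(n_steps):
        r = measure_restoring_force(d)         # "experimental" r(x)
        v = (d - d_prev) / dt                  # backward-difference velocity
        a = (f[i] - C * v - r) / M             # solve M*a + C*v + r = f for a
        d_next = 2.0 * d - d_prev + dt**2 * a  # central-difference update
        d_prev, d = d, d_next
        history.append(d)
    return history
```

With a linear stand-in r(x) = Kx this reduces to the classical central-difference scheme, which is stable only for ∆t below 2/ω; removing that restriction is one motivation for the implicit α method discussed in the next section.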
4 A Fast Hybrid Testing System with Realtime Capabilities
The hybrid simulation capabilities at the CU NEES facility are based on a constrained imple-
mentation of the α method (Hughes, 1983). The favorable stability and damping properties
of this method and its successful application to conventional pseudodynamic tests (Shing et
al. 1991) make it well suited for a fast and continuous testing system. With the understanding that the externally applied force vector is balanced by inertial, damping, and restoring force components, and that each of these components is composed of contributions from both the numerical (FEM) and experimental portions of the hybrid simulation, the damping term is generalized so as to admit nonlinear behavior. In so doing, equation 1 is modified to include the nonlinear damping term s
Ma + s(v) + r(x) = f. (2)
When considering a hybrid structure each of the three terms Ma, s(v) and r(x) may be
expanded so as to make explicit the hybrid nature of this representation.
Ma = (Mexp + MFEM )a (3)
s(v) = sexp(v) + CFEM (v) (4)
r(x) = rexp(x) + rFEM (x) (5)
The discrete time equilibrium equations for this representation of a second order dynamic
system are
Mai+1 + (1 + α)si+1 − αsi + (1 + α)ri+1 − αri = (1 + α)fi+1 − αfi (6)
di+1 = di + ∆tvi + ∆t²[(1/2 − β)ai + βai+1] (7)
vi+1 = vi + ∆t [(1 − γ) ai + γai+1] (8)
where M is the mass matrix, s is the damping force vector (assumed to be a nonlinear function of the velocity vector), and r and f are the restoring force and externally applied force vectors, respectively.
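For a linear structure (s = Cv, r = Kd) equations 6 through 8 can be solved directly for the new acceleration, which gives a compact way to see the scheme in action. The sketch below is illustrative only (a scalar SDOF, not the CU-NEES code); the function name is hypothetical, and the choices β = (1 − α)²/4 and γ = (1 − 2α)/2 are the standard α-method parameters.

```python
def hht_alpha_step(M, C, K, d, v, a, f_i, f_i1, dt, alpha=-0.1):
    """One step of the alpha (HHT) method for a linear SDOF, eqs 6-8."""
    beta = (1.0 - alpha) ** 2 / 4.0
    gamma = (1.0 - 2.0 * alpha) / 2.0
    # Predictors: eqs 7 and 8 with the unknown a_{i+1} terms separated out.
    dp = d + dt * v + dt**2 * (0.5 - beta) * a
    vp = v + dt * (1.0 - gamma) * a
    # Substitute d_{i+1} = dp + beta*dt^2*a1 and v_{i+1} = vp + gamma*dt*a1
    # into eq 6 and solve the resulting linear equation for a1.
    lhs = M + (1.0 + alpha) * (C * gamma * dt + K * beta * dt**2)
    rhs = ((1.0 + alpha) * f_i1 - alpha * f_i
           + alpha * (C * v + K * d)
           - (1.0 + alpha) * (C * vp + K * dp))
    a1 = rhs / lhs
    return dp + beta * dt**2 * a1, vp + gamma * dt * a1, a1
```

Stepping a free vibration with a time step far above the explicit stability limit leaves the response bounded, consistent with the unconditional stability noted above.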
This direct method of time integration determines equilibrium at equally spaced time inter-
vals which herein will be referred to as the integration interval. In order to allow for nonlinear
structural response it is necessary to include an iteration capability that converges to the equilibrium condition within each integration interval. A modified Newton-Raphson iteration method is applied to the discrete equations of motion (equation 2). The finite number of iterations, which will be constrained to a fixed and constant number (Shing et al. 2002), acts to subdivide the integration interval into l iteration intervals. If each interval is given equal time weighting
the iteration interval may be expressed as
δt = ∆t/l (9)
where l is the integer number of Newton iterations, and ∆t and δt are the time intervals associated with the integration interval and iteration intervals, respectively. By fixing l to be
a constant integer value a favorable degree of determinism is achieved which is important for
realtime integration and hybrid testing. This determinism comes at the price of constraining
Figure 1: Command Interpolation during Newton Iteration (displacement command x plotted against prototype time and simulation time, in seconds)
the calculation of equilibrium to a limited number of Newton iterations and a fixed interval of
time in the case of consistently scaled and realtime simulations. Experience at the CU NEES FHT facility has indicated that ∆t = 0.01 s and δt = 0.001 s are reasonable values that balance the need for accuracy and speed.
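Figure 1 depicts the interpolation of displacement commands across the iteration intervals. A minimal sketch of that idea (purely illustrative; the controller-side implementation is not detailed in this report, and the function name is hypothetical) linearly ramps from the previous command to each new Newton iterate over one iteration interval δt:

```python
def ramp_commands(x_prev, x_new, substeps):
    """Linearly interpolated actuator commands for the move from the
    previous displacement command x_prev to the new Newton iterate x_new
    over one iteration interval, divided into `substeps` controller ticks."""
    step = (x_new - x_prev) / substeps
    return [x_prev + step * (j + 1) for j in range(substeps)]
```

Because each ramp ends exactly where the next one begins, the command stream sent to the actuator remains continuous even though the Newton iterates arrive at discrete δt intervals.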
A simplified discrete representation of the force equilibrium equation in residual form is obtained by solving equation 7 for the acceleration and substituting into equation 6 (the time step index is denoted n hereafter)
fr = Mdn+1 + ce + c1(sn+1 + rn+1) (10)
where
ce = −M[dn + ∆tvn + ∆t²(0.5 − β)an] − ∆t²β[α(sn + rn − fn) + (1 + α)fn+1] (11)

c1 = ∆t²β(1 + α) (12)
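For completeness, the algebra behind equations 10 through 12 can be traced as follows (a sketch of the substitution using the same symbols as above):

```latex
% Solve eq. (7) for a_{n+1}:
a_{n+1} = \frac{1}{\beta \Delta t^2}
          \Big[ d_{n+1} - d_n - \Delta t\, v_n
                - \Delta t^2 \big(\tfrac{1}{2} - \beta\big) a_n \Big]

% Substitute into eq. (6) and multiply through by \beta \Delta t^2:
M d_{n+1}
 - M \big[ d_n + \Delta t\, v_n + \Delta t^2 (\tfrac{1}{2} - \beta) a_n \big]
 + \beta \Delta t^2 \big[ (1+\alpha)(s_{n+1} + r_{n+1})
 - \alpha (s_n + r_n - f_n) - (1+\alpha) f_{n+1} \big] = 0

% which is f_r = M d_{n+1} + c_e + c_1 (s_{n+1} + r_{n+1})
% with the c_e and c_1 of eqs. (11) and (12).
```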
Equation 10 contains two unknown force terms, sn+1 and rn+1, whose independent variables are the velocity vn+1 and the displacement dn+1 for the damping and stiffness terms, respectively. Each is treated as a general nonlinear relationship. By combining equations 7 and 8, an equation expressing vn+1 in terms of known quantities and dn+1 is obtained
vn+1 = (γ/(β∆t))(dn+1 − dn) + (1 − γ/β)vn + ∆t(1 − γ/(2β))an (13)
With equations 10 and 13 a general modified Newton iteration procedure can be used to
solve for the unknown discrete displacement field dn+1. The iterative solution procedure is
based on the linearized Taylor series representation of the residual equilibrium equation
fr(dn+1 + ∆d) ≈ fr(dn+1) + (∂fr/∂dn+1)∆d (14)
where successive displacement increments ∆d are computed until a convergence criterion is satisfied. With appropriate consideration of the nonlinear damping and stiffness terms, the Jacobian may be expressed in the form
∂fr/∂dn+1 = M + c1[(∂sn+1/∂vn+1)(∂vn+1/∂dn+1) + ∂rn+1/∂dn+1] (15)
For the purpose of efficiency within the numerical integration both the stiffness and the
damping terms are approximated with the linearized initial stiffness and initial damping ma-
trices Ki and Ci.
∂fr/∂dn+1 = M + c1[Ci(∂vn+1/∂dn+1) + Ki] (16)
Finally, a single equation is obtained, which is repeatedly solved until the desired level of accuracy is achieved:

d^(k+1)_(n+1) = d^(k)_(n+1) − [M + c1((γ/(β∆t))Ci + Ki)]^(−1) fr(d^(k)_(n+1)) (17)
The indices n and k indicate the time step and the Newton iteration number, respectively. In preliminary testing this new integration scheme has proven equal or superior to the prior scheme (which was restricted to linear viscous damping) in terms of both accuracy and rate of convergence.
5 Conclusion
The George E. Brown Jr. Network for Earthquake Engineering Simulation (NEES) is made up
of 15 advanced research laboratories at universities in the US and is intended to advance the
state of the art in earthquake engineering research. These 15 laboratories are linked to each
other with grid software that facilitates collaboration, remote participation, distributed testing,
and a single data archive that acts as a long term repository for test results and documentation.
In 2001 the NSF selected the University of Colorado as one of the 15 prominent universities to
receive support under this program. The facility at CU specializes in realtime or Fast Hybrid
Testing (FHT) and its application to vibration testing and simulation in earthquake engineering.
The FHT Facility at the University of Colorado has, by design, been developed with real-
time hybrid simulation capabilities in mind. Realtime hybrid simulation which synchronously
combines numerical and experimental test components into a hybrid test setting must operate
within an efficient and deterministic computational environment. To achieve this all critical
computational, control and measurement systems utilize realtime operating systems and are
networked to one another with a realtime shared-memory network. Additionally, all physical testing hardware, which is currently exclusively servo-hydraulic, must also have high-performance dynamic capabilities. The need for high-performance capabilities must not compromise the precise and stable control of the multi-actuator testing hardware. This is achieved using state-of-the-art hydraulic equipment provided by MTS Corporation. This custom equipment and testing
technology is maintained and operated by a skilled professional staff that was also integral to
the development and commissioning of this unique testing system. The direct time integration
scheme presented here is specifically designed for realtime hybrid simulations.
6 Acknowledgements
The financial support of CU-NEES is gratefully acknowledged.