1) The document analyzes optimal parameters for a geometric multigrid method applied numerically to a two-dimensional thermoelasticity problem and to the Laplace equation.
2) It studies the effect of grid size, the number of inner iterations, and the number of grid levels on computational time.
3) The results are compared between the two problems, against single-grid methods, and against other literature to determine whether coupling the equations affects multigrid performance.
A High Order Continuation Based On Time Power Series Expansion And Time Ratio...IJRES Journal
In this paper, we propose a high order continuation based on a time power series expansion and a time rational representation, known as Padé approximants, for solving nonlinear structural dynamic problems. The solution of the nonlinear structural dynamic problem, discretized by the finite element method, is sought in the form of a power series expansion with respect to time. The Padé approximant technique is introduced to improve the validity range of the power series expansion. The whole solution is built branch by branch using the continuation method. To illustrate the performance of the proposed high order continuation, we give numerical comparisons on an example of forced nonlinear vibration of an elastic beam.
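The series-plus-Padé idea above can be sketched generically. The following is a minimal illustration (not the authors' finite element implementation): from the Taylor coefficients of exp(t), a [2/2] Padé approximant is built and compared with the truncated series itself; the approximant stays accurate over a wider range of t. The function names `pade` and `evaluate` are my own choices.

```python
from math import factorial, exp
import numpy as np

def pade(c, L, M):
    """Build the [L/M] Pade approximant from Taylor coefficients c[0..L+M].
    Returns (a, b): numerator and denominator coefficient arrays, b[0] == 1."""
    C = np.array(c, dtype=float)
    # Solve for denominator coefficients b[1..M] from the linear system
    #   sum_{j=1..M} b[j] * c[L+k-j] = -c[L+k],  k = 1..M
    A = np.array([[C[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -C[L + 1:L + M + 1])))
    # Numerator coefficients follow by convolving the series with b
    a = np.array([sum(b[j] * C[i - j] for j in range(0, min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b

def evaluate(a, b, t):
    num = sum(ai * t**i for i, ai in enumerate(a))
    den = sum(bj * t**j for j, bj in enumerate(b))
    return num / den

# Taylor coefficients of exp(t): 1/k!
c = [1.0 / factorial(k) for k in range(5)]
a, b = pade(c, 2, 2)
t = 1.5
series = sum(ck * t**k for k, ck in enumerate(c))
# Same coefficients, but the rational form tracks exp(t) further out
print(abs(evaluate(a, b, t) - exp(t)), abs(series - exp(t)))
```

For exp, the computed denominator is the classical 1 - t/2 + t²/12, and at t = 1.5 the Padé error is roughly half that of the degree-4 truncated series built from the same coefficients.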
A Mathematical Model to Solve Nonlinear Initial and Boundary Value Problems b...IJERA Editor
In this paper, a novel method called the Laplace-differential transform method (LDTM) is used to obtain an approximate analytical solution for strongly nonlinear initial and boundary value problems arising in engineering phenomena. The method is shown to work very well over a wide range of parameters, and excellent agreement between the approximate solution and the exact one is demonstrated and discussed in three examples. The most significant feature of this method is its capability of handling nonlinear boundary value problems.
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...IJDKP
Quantum clustering (QC) is a data clustering algorithm based on quantum mechanics, in which each point in a given dataset is replaced by a Gaussian. The width of the Gaussian is a value σ, a hyper-parameter which can be manually defined and tuned to suit the application. Numerical methods are used to find all the minima of the quantum potential, as they correspond to cluster centers. Herein, we investigate the mathematical task of expressing and finding all the roots of the exponential polynomial corresponding to the minima of a two-dimensional quantum potential. This is a challenging task because such expressions are normally impossible to solve analytically. However, we prove that if the points are all included in a square region of side σ, there is only one minimum. This bound not only limits the number of solutions to search for by numerical means, it also allows us to propose a new numerical approach "per block". This technique decreases the number of particles by approximating some groups of particles by weighted particles. These findings are useful not only for the quantum clustering problem but also for the exponential polynomials encountered in quantum chemistry, solid-state physics and other applications.
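The single-minimum claim above is easy to probe numerically. The sketch below (illustrative only; the potential is written up to an additive constant, and the grid-scan minimum counter is my own construction, not the paper's "per block" method) places points inside a σ-by-σ square, evaluates the Gaussian-sum potential on a grid, and counts strict local minima.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0
# All points inside a sigma-by-sigma square: the regime where a single
# minimum of the quantum potential is expected.
pts = rng.uniform(0.0, sigma, size=(12, 2))

def qc_potential(x, pts, sigma):
    """Quantum-clustering potential up to an additive constant:
    V(x) = sum_i ||x - x_i||^2 g_i / (2 sigma^2 sum_i g_i),
    where g_i are the Gaussian weights exp(-||x - x_i||^2 / (2 sigma^2))."""
    d2 = np.sum((x - pts) ** 2, axis=1)
    g = np.exp(-d2 / (2 * sigma ** 2))
    return d2 @ g / (2 * sigma ** 2 * g.sum())

# Evaluate on a grid covering the square with a margin, then count
# interior grid points that are strictly below all 8 neighbours.
n = 81
xs = np.linspace(-1.0, 2.0, n)
V = np.array([[qc_potential(np.array([x, y]), pts, sigma) for x in xs]
              for y in xs])
minima = 0
for i in range(1, n - 1):
    for j in range(1, n - 1):
        patch = V[i - 1:i + 2, j - 1:j + 2]
        if (patch > V[i, j]).sum() == 8:
            minima += 1
print(minima)
```

With all points confined to the σ-square, the scan finds exactly one minimum, consistent with the stated bound; spreading the points over several σ typically produces several.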
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
MARGINAL PERCEPTRON FOR NON-LINEAR AND MULTI CLASS CLASSIFICATION ijscai
The generalization error of a classifier can be reduced by a larger margin of the separating hyperplane. The proposed classification algorithm introduces a margin into the classical perceptron algorithm to reduce generalization error by maximizing the margin of the separating hyperplane. The algorithm uses the same update rule as the perceptron and converges in a finite number of updates to solutions possessing any desired fraction of the margin. This solution is then further optimized to obtain the maximum possible margin. The algorithm can handle linear, non-linear and multi-class problems. Experimental results place the proposed classifier on par with the support vector machine, and even better in some cases. Some preliminary experimental results are briefly discussed.
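The core idea, updating on margin violations rather than only on mistakes while keeping the perceptron update rule, can be sketched as follows. This is a generic margin perceptron on toy data, not the paper's full algorithm (which adds a further margin-maximization step and kernel/multi-class handling); the function name and data are my own choices.

```python
import numpy as np

def margin_perceptron(X, y, margin, epochs=1000, lr=1.0):
    """Perceptron that keeps updating while any point violates the
    required margin, not merely while points are misclassified."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= margin:   # margin violation, not just a mistake
                w += lr * yi * xi         # classical perceptron update rule
                updated = True
        if not updated:                   # every point clears the margin
            return w
    return w

# Toy linearly separable data with a bias feature appended
X = np.array([[2.0, 1.0, 1.0], [1.5, 2.0, 1.0],
              [-1.0, -1.5, 1.0], [-2.0, -0.5, 1.0]])
y = np.array([1, 1, -1, -1])
w = margin_perceptron(X, y, margin=1.0)
print(all(yi * (w @ xi) > 1.0 for xi, yi in zip(X, y)))  # → True
```

Setting `margin=0` recovers the classical perceptron; a positive margin forces the returned hyperplane to clear every training point by at least that amount, which is the mechanism behind the reduced generalization error claimed above.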
Compression is an image processing operation that changes the representation of information in order to reduce storage requirements and transmission time. In this work we propose a new image compression algorithm based on Haar wavelets, introducing a compression coefficient that controls the compression level. This method reduces the complexity of obtaining the desired level of compression from the original image alone, without computing intermediate levels.
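A one-level 2D Haar transform with coefficient thresholding can be sketched as follows. This is a generic illustration, not the paper's algorithm: I assume the "compression coefficient" acts as a magnitude threshold on the detail subbands, and the function names are my own.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform: average + 3 detail subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row pairs: average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row pairs: detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((2 * a.shape[0], a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (8, 8))
ll, lh, hl, hh = haar2d(img)
# "Compression": zero out detail coefficients below a threshold; the
# threshold plays the role of the compression-level control.
thr = 5.0
lh, hl, hh = (np.where(np.abs(c) < thr, 0.0, c) for c in (lh, hl, hh))
rec = ihaar2d(ll, lh, hl, hh)
print(np.max(np.abs(rec - img)))
```

With the threshold at zero the round trip is exact; raising it discards more detail coefficients (more compression) at the cost of a per-pixel error bounded by three times the threshold, since each pixel is a ±1 combination of one approximation and three detail coefficients.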
A Regularization Approach to the Reconciliation of Constrained Data SetsAlkis Vazacopoulos
A new iterative solution to the statistical adjustment of constrained data sets is derived in this paper. The method is general and may be applied to any weighted least squares problem containing nonlinear equality constraints. Other methods are available to solve this class of problem, but are complicated when unmeasured variables and model parameters are not all observable and the model constraints are not all independent. Notable exceptions, however, are the methods of Crowe (1986) and Pai and Fisher (1988), although these implementations require the determination of a matrix projection at each iteration, which may be computationally expensive. An alternative solution is proposed which makes the pragmatic assumption that the unmeasured variables and model parameters are known with a finite but equal uncertainty. We then re-formulate the well-known data reconciliation solution in the absence of these unknowns to arrive at our new solution; hence the regularization approach. Another procedure for the classification of observable and redundant variables is also given which does not require the explicit computation of the matrix projection. The new algorithm is demonstrated using three illustrative examples previously used in other studies.
Optimising Data Using K-Means Clustering AlgorithmIJERA Editor
K-means is one of the simplest unsupervised learning algorithms that solve the well-known clustering problem. The procedure follows a simple and easy way to classify a given data set through a certain number of clusters (say k clusters) fixed a priori. The main idea is to define k centroids, one for each cluster. These centroids should be placed carefully, because different locations cause different results; the better choice is therefore to place them as far away from each other as possible.
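The procedure above, including the "place the initial centroids far apart" heuristic, can be sketched in a few lines. This is a minimal illustration with my own variable names; the farthest-point seeding is one common way to realize that heuristic, not the only one.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means with farthest-point seeding of the initial centroids."""
    rng = np.random.default_rng(seed)
    # Heuristic from the text: spread the initial centroids far apart.
    centroids = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d2)])   # farthest point from chosen set
    C = np.array(centroids)
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute means
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        newC = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(newC, C):
            break
        C = newC
    return C, labels

# Two well-separated blobs: k-means should recover the blob membership
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
C, labels = kmeans(X, 2)
print(len(set(labels[:20].tolist())), len(set(labels[20:].tolist())))
```

On this toy data each blob lands in its own cluster; with randomly coincident initial centroids instead of spread-out ones, the same loop can converge to a poorer local optimum, which is exactly the sensitivity the abstract warns about.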
Texture classification of fabric defects using machine learning IJECEIAES
In this paper, a novel algorithm for automatic fabric defect classification is proposed, based on the combination of a texture analysis method and a support vector machine (SVM). Three texture methods were used and compared: GLCM, LBP, and LPQ. Each was combined with an SVM classifier. The system was tested using the TILDA database, and a comparative study of the performance and running time of the three methods was carried out. The results are promising and show that LBP is the best of the three methods for recognition and classification, and that the SVM is a suitable classifier for such problems. We also demonstrate that some defects are easier to classify than others.
Second or fourth-order finite difference operators, which one is most effective?Premier Publishers
This paper presents higher-order finite difference (FD) formulas for the spatial approximation of time-dependent reaction-diffusion problems, with a clear justification through examples of why the fourth-order FD formula is preferable to its second-order counterpart, which has been widely used in the literature. Methods for the solution of initial and boundary value PDEs, such as the method of lines (MOL), are of broad interest in science and engineering. The procedure begins by discretizing the spatial derivatives in the PDE with algebraic approximations; the key idea of MOL is to replace the spatial derivatives with these approximations. Once this is done, the spatial derivatives are no longer stated explicitly in terms of the spatial independent variables: only one independent variable remains, and the resulting semi-discrete problem is a system of coupled ordinary differential equations (ODEs) in time. Thus, any integration algorithm for initial value ODEs can be applied to compute an approximate numerical solution to the PDE. The basic properties of these schemes, such as order of accuracy, convergence, consistency, stability and symmetry, are examined in detail.
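The MOL recipe and the second- versus fourth-order comparison can be sketched on the simplest diffusion problem. The following is my own minimal example (periodic heat equation with a known exact solution, not the paper's reaction-diffusion test cases): both stencils are integrated in time with classical RK4 and their errors compared.

```python
import numpy as np

# Periodic heat equation u_t = u_xx on [0, 2*pi) with exact solution
# u(x, t) = exp(-t) * sin(x).
N = 32
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
h = x[1] - x[0]
u0 = np.sin(x)

def lap2(u):
    # Second-order central difference: (u[i-1] - 2 u[i] + u[i+1]) / h^2
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / h**2

def lap4(u):
    # Fourth-order central difference:
    # (-u[i-2] + 16 u[i-1] - 30 u[i] + 16 u[i+1] - u[i+2]) / (12 h^2)
    return (-np.roll(u, 2) + 16 * np.roll(u, 1) - 30 * u
            + 16 * np.roll(u, -1) - np.roll(u, -2)) / (12 * h**2)

def rk4(rhs, u, dt, steps):
    # Classical Runge-Kutta for the semi-discrete ODE system du/dt = rhs(u)
    for _ in range(steps):
        k1 = rhs(u)
        k2 = rhs(u + dt / 2 * k1)
        k3 = rhs(u + dt / 2 * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

exact = np.exp(-1.0) * np.sin(x)
err2 = np.max(np.abs(rk4(lap2, u0, 1e-3, 1000) - exact))
err4 = np.max(np.abs(rk4(lap4, u0, 1e-3, 1000) - exact))
print(err2, err4)
```

With the time step small enough that temporal error is negligible, the remaining error is the spatial truncation error, and the fourth-order stencil is roughly two orders of magnitude more accurate on the same grid, which is the paper's point.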
Corporate presentation of CADD Centre Software Solution. It describes who we are and the services we have recently provided to industry.
Choice of Numerical Integration Method for Wind Time History Analysis of Tall...inventy
Wind tunnel tests are performed routinely around the world for designing tall buildings, but the advent of powerful computational tools will make time-history analysis for wind more common in the near future. As the duration of wind storms ranges from tens of minutes to hours while earthquake durations are typically less than three to four minutes, the time step size (Δt) for wind studies needs to be much larger, both to reduce computational time and to save disk space. As the error in any numerical solution of the equation of motion depends on the step size (Δt), careful investigation of the choice of numerical integration method for wind analyses is necessary. From the wide variety of integration methods available, three were selected that seem appropriate for 3D time-history analysis of tall buildings under wind: modal time history analysis, the Hilber-Hughes-Taylor (HHT) or α-method with α = -0.1, and the Newmark method with β = 0.25 and γ = 0.5 (i.e., the trapezoidal rule). SAP2000, a common structural analysis software tool, and a 64-story structure are used to conduct all the analyses in this paper. A boundary layer wind tunnel (BLWT) pressure time history measured at 120 locations around the building envelope of a similar structure is used for the analyses. Analyses performed with both the HHT and Newmark methods considering P-delta effects show that second-order effects have a considerable impact on both displacement and acceleration response. This result shows that it is necessary to account for P-delta effects in wind analysis of tall buildings. As direct integration time history analysis required very large computation times and very large physical memory for a wind duration of hours, a modal analysis with reduced stiffness is considered a good alternative.
For that purpose, a non-linear static analysis of the structure with a load combination of 1.0D + 1.0L is performed in SAP2000, and the reduced stiffness of the structure after this analysis is used in an eigenvalue analysis to extract the mode shapes and frequencies of the structure. The first 20 modes are then used to perform a modal time history analysis for the wind load. The results show that the responses from the modal analysis with 20 modes (reduced stiffness) are comparable with those from the P-Δ analyses with the Newmark method.
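The Newmark method with β = 0.25 and γ = 0.5 used above is straightforward to sketch for a single degree of freedom. The following is a minimal illustration (not the SAP2000 multi-story model), using the standard effective-stiffness formulation; the function name and the free-vibration test problem are my own choices.

```python
import numpy as np

def newmark(m, c, k, p, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Newmark time integration of m*u'' + c*u' + k*u = p(t).
    beta = 1/4, gamma = 1/2 is the unconditionally stable average
    acceleration (trapezoidal) rule."""
    u, v = u0, v0
    a = (p(0.0) - c * v - k * u) / m          # initial acceleration
    keff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    hist = [u]
    for n in range(1, nsteps + 1):
        t = n * dt
        # Effective load carries the known state at step n
        peff = (p(t)
                + m * (u / (beta * dt**2) + v / (beta * dt)
                       + (1 / (2 * beta) - 1) * a)
                + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                       + dt * (gamma / (2 * beta) - 1) * a))
        unew = peff / keff
        anew = ((unew - u) / (beta * dt**2) - v / (beta * dt)
                - (1 / (2 * beta) - 1) * a)
        v = v + dt * ((1 - gamma) * a + gamma * anew)
        u, a = unew, anew
        hist.append(u)
    return np.array(hist)

# Undamped free vibration: exact solution is cos(omega * t)
omega = 2 * np.pi
hist = newmark(m=1.0, c=0.0, k=omega**2, p=lambda t: 0.0,
               u0=1.0, v0=0.0, dt=0.01, nsteps=100)
print(abs(hist[-1] - np.cos(omega * 1.0)))
```

With β = 1/4 and γ = 1/2 the scheme introduces no numerical damping, only a small period elongation of order (ωΔt)²/12, so after one full period the displacement returns very close to its initial value.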
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...cscpconf
In this paper, based on the definition of conformable fractional derivative, the functional
variable method (FVM) is proposed to seek the exact traveling wave solutions of two higherdimensional
space-time fractional KdV-type equations in mathematical physics, namely the
(3+1)-dimensional space–time fractional Zakharov-Kuznetsov (ZK) equation and the (2+1)-
dimensional space–time fractional Generalized Zakharov-Kuznetsov-Benjamin-Bona-Mahony
(GZK-BBM) equation. Some new solutions are procured and depicted. These solutions, which
contain kink-shaped, singular kink, bell-shaped soliton, singular soliton and periodic wave
solutions, have many potential applications in mathematical physics and engineering. The
simplicity and reliability of the proposed method is verified.
A NEW ALGORITHM FOR SOLVING FULLY FUZZY BI-LEVEL QUADRATIC PROGRAMMING PROBLEMSorajjournal
This paper is concerned with a new method to find the fuzzy optimal solution of fully fuzzy bi-level non-linear (quadratic) programming (FFBLQP) problems, in which all the coefficients and decision variables of both the objective functions and the constraints are triangular fuzzy numbers (TFNs). The new method is based on decomposing the given problem into a bi-level problem with three crisp quadratic objective functions and bounded-variable constraints. In order to obtain a fuzzy optimal solution of the FFBLQP problem, the concept of a tolerance membership function is used to develop a fuzzy max-min decision model that generates a satisfactory fuzzy solution, in which the upper-level decision maker (ULDM) specifies his/her objective functions and decisions with possible tolerances described by membership functions of fuzzy set theory. Then the lower-level decision maker (LLDM) uses this preference information from the ULDM and solves his/her problem subject to the ULDM's restrictions. Finally, the decomposition method is illustrated by a numerical example.
An efficient hardware logarithm generator with modified quasi-symmetrical app...IJECEIAES
This paper presents a low-error, low-area FPGA-based hardware logarithm generator for digital signal processing systems which require high-speed, real-time logarithm operations. The proposed logarithm generator employs the modified quasi-symmetrical approach for an efficient hardware implementation. The error analysis and implementation results are also presented and discussed. The achieved results show that the proposed approach can reduce both the approximation error and the hardware area compared with traditional methods.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD within UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps means. We also ran a lovely workshop with the participants, exploring different ways to think about quality and testing in different parts of the DevOps infinity loop.
Threats to mobile devices are increasingly prevalent and growing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of these features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect personal devices and information.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share foundational concepts to build on.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
An important property of the multigrid method is the independence of the number of iterations from the number of nodes in the finest grid required to obtain convergence. The application of the multigrid method results in an approximately linear increase of CPU time with grid refinement, allowing the resolution of problems on much finer grids and, therefore, more accurate solutions (Hortmann and Peric, 1990).
Several multigrid algorithms can be found in the literature, and they can be divided into two different schemes: CS (Correction Scheme) and FAS (Full Approximation Scheme). Both schemes can be implemented computationally with the V-cycle, W-cycle, F-cycle, Full Multigrid (FMG) and other methodologies (Briggs et al., 2000; Trottenberg, 2001). The CS scheme is generally used in linear problems and FAS in nonlinear ones (Brandt, 1977).
All these algorithms depend on parameters that influence the CPU time. Choosing the best combination of parameter values of the multigrid method can improve the convergence rate by a factor close to 2 (Ferziger and Peric, 1999). Many of these parameters were studied and optimized by Pinto et al. (2005) for linear advection, advection-diffusion and Burgers' equation problems. Pinto and Marchi (2006) analyzed the CS and FAS schemes with several solvers and the standard coarsening ratio for the Laplace equation and suggested the use of the maximum possible number of grids. Tannehill et al. (1997) affirmed that the optimum performance of the multigrid method is obtained with several grids and suggested the use of 5 or 6 grids for the 2D Laplace problem on a 129x129-node grid. Oliveira et al. (2006) also studied the optimum values of some parameters of the multigrid method in linear and nonlinear one-dimensional problems.
In this work, the optimum parameter values of the geometric multigrid method, with respect to CPU time, are obtained for a steady-state, two-dimensional linear model of thermoelasticity (TE), with two coupled equations and Dirichlet boundary conditions. It is intended to verify whether the performance of multigrid changes when compared to a one-equation problem. The following parameters of the multigrid method are analyzed: the number of solver inner iterations (ITI), the number of grids (L) and the number of variables (N). The results are compared to a two-dimensional diffusion problem, the Laplace problem (LP), which is solved by both multigrid and single-grid methods, and with other results from the literature. The analyses consider the results of the multigrid method, single-grid methods (only one grid) and the Gauss Elimination (direct) method. The effect of the coupling of the variables u and v, which appear in the two equations, is also studied to verify whether it interferes in the performance of the iterative procedure when the multigrid method is used. The multigrid method properties are not preserved when applied to the Navier-Stokes equations with high Reynolds numbers (Ferziger and Peric, 1999). Therefore, one also intends to apply the conclusions of this qualitative study to the Navier-Stokes equations in alternative formulations without pressure-velocity coupling.
This text is organized as follows: in Section 2, the mathematical and numerical models are presented; in Section 3, the computational code is described; in Section 4, the results are presented; and in Section 5, the general conclusions are drawn.
2. Mathematical and numerical models
The results of the thermoelasticity problem are compared with those of a linear two-dimensional heat conduction problem (Laplace problem), both in steady state and with Dirichlet boundary conditions, in Cartesian coordinates.
Problem 1: The constitutive equations of the two-dimensional steady-state linear thermoelasticity problem, for elastic bodies whose materials are homogeneous and isotropic (Hooke's law), can be reduced to two partial differential equations written in terms of the displacements
∂²u/∂x² + ∂²u/∂y² + C_λ ∂/∂x( ∂u/∂x + ∂v/∂y ) = 2αC_λ ∂T/∂x + S_u    (3)
∂²v/∂x² + ∂²v/∂y² + C_λ ∂/∂y( ∂u/∂x + ∂v/∂y ) = 2αC_λ ∂T/∂y + S_v    (4)
where C_λ = (1+λ)/(1−λ), λ is the Poisson's ratio, α is the coefficient of thermal expansion, and u and v are the displacements in the coordinate directions x and y, respectively. The temperature field is given by the analytical solution of the two-dimensional diffusion problem,
T(x,y) = sin(πx) sinh(πy) / sinh(π)    (5)
The analytical solution proposed for the system formed by Eqs. (3) and (4) is
The boundary conditions are T(x,0) = T(0,y) = 0, T(x,1) = x and T(1,y) = y, where T represents the temperature. The analytical solution of this problem is given by

T(x,y) = xy    (13)
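As a quick sanity check (an illustrative Python sketch, not part of the paper's code), both analytical fields can be verified numerically: the Laplacian of Eq. (5) and of Eq. (13) should vanish in the interior of the unit square.

```python
import math

def T5(x, y):
    # Temperature field of Eq. (5)
    return math.sin(math.pi * x) * math.sinh(math.pi * y) / math.sinh(math.pi)

def T13(x, y):
    # Analytical solution of the Laplace problem, Eq. (13)
    return x * y

def laplacian(f, x, y, d=1e-4):
    # Second-order central-difference approximation of f_xx + f_yy
    return ((f(x + d, y) - 2.0 * f(x, y) + f(x - d, y)) / d**2
            + (f(x, y + d) - 2.0 * f(x, y) + f(x, y - d)) / d**2)

# Both fields satisfy Laplace's equation at an interior point,
# and Eq. (13) matches the Dirichlet boundary conditions.
print(abs(laplacian(T5, 0.3, 0.7)) < 1e-4)   # True
print(abs(laplacian(T13, 0.3, 0.7)) < 1e-4)  # True
print(T13(0.4, 0.0), T13(0.0, 0.6), T13(0.4, 1.0), T13(1.0, 0.6))
```

The probe point (0.3, 0.7) and the step d = 1e-4 are arbitrary choices for the check.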
The numerical model adopted to solve the Laplace problem is the same used for the thermoelasticity problem.
In this work, the geometric multigrid (Wesseling and Oosterlee, 2001) is adopted, with the V-cycle CS scheme for both problems. In this scheme, the information transferred between grid levels is the residue (in the restriction) and the correction of the solution (in the prolongation). In the CS scheme, Eq. (1) is solved only in the finest grid; in coarse grids, only the residual equation is solved (Briggs et al., 2000). The correction is transferred to be added to the solution of the current grid and, therefore, the next refined grid has its initial estimate brought up to date with a correction value that contributes to the elimination of the low-frequency errors (Trottenberg, 2001). In this work, restriction by injection and prolongation by bilinear interpolation are adopted.
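These two transfer operators can be sketched as follows (an illustrative NumPy sketch for a vertex-centered grid with coarsening ratio 2; the function names are ours, not the paper's):

```python
import numpy as np

def restrict_injection(fine):
    # Injection restriction: each coarse node takes the value of the
    # coincident fine-grid node (coarsening ratio 2).
    return fine[::2, ::2]

def prolong_bilinear(coarse):
    # Bilinear-interpolation prolongation from an (n x n) coarse grid
    # to the corresponding (2n-1 x 2n-1) fine grid.
    n = coarse.shape[0]
    fine = np.zeros((2 * n - 1, 2 * n - 1))
    fine[::2, ::2] = coarse                                    # coincident nodes
    fine[1::2, ::2] = 0.5 * (coarse[:-1, :] + coarse[1:, :])   # vertical midpoints
    fine[::2, 1::2] = 0.5 * (coarse[:, :-1] + coarse[:, 1:])   # horizontal midpoints
    fine[1::2, 1::2] = 0.25 * (coarse[:-1, :-1] + coarse[1:, :-1]
                               + coarse[:-1, 1:] + coarse[1:, 1:])  # cell centres
    return fine
```

Injection simply samples the coincident nodes, while bilinear prolongation averages the two or four nearest coarse neighbours; restricting a prolonged field by injection recovers the coarse field exactly.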
The ideal solver to be used with the multigrid method is one with good smoothing properties, for example, the Gauss-Seidel method (Briggs et al., 2000). The Modified Strongly Implicit method (MSI) (Schneider and Zedan, 1981) presented better performance than Gauss-Seidel in previous results of Pinto and Marchi (2006). Here, the MSI method was chosen as the standard solver, and the coarsening ratio is equal to 2 (the standard value in the literature); this means that the size of an element on a finer grid is half the size of the element on the immediately coarser grid. Other coarsening ratios were studied by Pinto et al. (2005) for one-dimensional problems of advection, advection-diffusion and Burgers' equation. For the thermoelasticity test problem, an algorithm of the multigrid method, CS scheme with V-cycle, for two grids is described in Tab. 1.
Table 1. CS scheme for two grids with V-cycle (adapted from Briggs et al., 2000)

LMG(u, u0, v, v0, bu, bv, h)
Begin
1. Smooth A_u u^h = b_u^h ITI times with initial guess u0^h;
2. Calculate the residue: R_u^h = b_u^h − A_u u^h;
3. Smooth A_v v^h = b_v^h ITI times with initial guess v0^h;
4. Calculate the residue: R_v^h = b_v^h − A_v v^h;
5. Restrict the residues: b_u^{2h} = I_h^{2h} R_u^h and b_v^{2h} = I_h^{2h} R_v^h;
6. Smooth A_u e_u^{2h} = b_u^{2h} ITI times with initial guess e_u^{2h} = 0;
7. Smooth A_v e_v^{2h} = b_v^{2h} ITI times with initial guess e_v^{2h} = 0;
8. Prolong the corrections: e_u^h = I_{2h}^h e_u^{2h} and e_v^h = I_{2h}^h e_v^{2h};
9. Correct the solution: u^h ← u^h + e_u^h and v^h ← v^h + e_v^h;
10. Smooth A_u u^h = b_u^h ITI times with initial guess u^h;
11. Smooth A_v v^h = b_v^h ITI times with initial guess v^h;
End
The algorithm described in Tab. 1 is applied for two grids, but it can be extended to more grids. To simplify the notation, the vector notation for u, v and b was omitted only in the algorithm. The restriction and prolongation operations are represented by I_h^{2h} and I_{2h}^h, respectively. The systems of equations for u and v are smoothed in the finest grid Ω^h (steps 1 and 3), in order to obtain an approximation of the solution with ITI iterations. The residue is calculated in steps 2 and 4, as indicated in Eq. (2), and then transferred to the residual source terms (step 5) of the coarser grid Ω^{2h}, where the systems of equations are solved (steps 6 and 7). In steps 8 and 9, the correction is transferred to the finest grid and the initial guess is re-estimated. In steps 10 and 11, the systems are solved on the finest grid with the corrected initial guess. The described algorithm covers only one V-cycle of the CS scheme. The complete procedure performs successive calls of LMG until a stop criterion is achieved.
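For illustration only, the cycle of Tab. 1 can be sketched in Python for a single unknown of the Laplace problem. This is a minimal sketch assuming a 5-point Laplacian and a Gauss-Seidel smoother in place of the MSI solver adopted in the paper; for the coupled problem, the same steps are repeated for v.

```python
import numpy as np

def smooth(u, b, h, iters):
    # Gauss-Seidel smoothing of A u = b, with A u = (4u - sum of the
    # 4 neighbours) / h^2 (the negative 5-point Laplacian).
    for _ in range(iters):
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                  + u[i, j - 1] + u[i, j + 1]
                                  + h * h * b[i, j])
    return u

def residue(u, b, h):
    # Steps 2 and 4 of Tab. 1: R = b - A u (zero on the Dirichlet boundary).
    r = np.zeros_like(u)
    r[1:-1, 1:-1] = b[1:-1, 1:-1] - (
        4.0 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
        - u[1:-1, :-2] - u[1:-1, 2:]) / (h * h)
    return r

def prolong(e2):
    # Bilinear interpolation from the 2h grid to the h grid.
    n = e2.shape[0]
    e = np.zeros((2 * n - 1, 2 * n - 1))
    e[::2, ::2] = e2
    e[1::2, ::2] = 0.5 * (e2[:-1, :] + e2[1:, :])
    e[::2, 1::2] = 0.5 * (e2[:, :-1] + e2[:, 1:])
    e[1::2, 1::2] = 0.25 * (e2[:-1, :-1] + e2[1:, :-1]
                            + e2[:-1, 1:] + e2[1:, 1:])
    return e

def v_cycle_cs(u, b, h, iti):
    # One two-grid V-cycle of the CS scheme for one unknown.
    u = smooth(u, b, h, iti)           # step 1: pre-smoothing
    r = residue(u, b, h)               # step 2: residue
    b2 = r[::2, ::2].copy()            # step 5: injection restriction
    e2 = np.zeros_like(b2)             # step 6: initial guess e = 0
    e2 = smooth(e2, b2, 2.0 * h, iti)  # step 6: smooth the residual equation
    u += prolong(e2)                   # steps 8-9: prolong and correct
    u = smooth(u, b, h, iti)           # step 10: post-smoothing
    return u
```

One call of `v_cycle_cs` on a random initial guess for the Laplace problem (b = 0) reduces the residual norm substantially, which is the behaviour the CS scheme relies on.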
For the two cases, the optimum number of inner iterations can be established as ITI_optimum = 2. The CPU time observed for the thermoelasticity problem in the finest grid is approximately 4 times larger than the time observed for the Laplace problem for the same grid size. The results show that the coupling of the equations does not influence the optimum number of inner iterations. The results are similar to those presented by Pinto and Marchi (2006) for the two-dimensional linear problem of heat conduction (Laplace problem), even using another tolerance criterion for the convergence.
Figure 1. Influence of the number of inner iterations on the CPU time.
4.2. Number of Grids – (L)
The study of the influence of the number of grids (L) takes into account the optimum number of inner iterations obtained previously. The intention is to optimize the CPU time. Figure 2 shows that the minimum CPU time occurs with the number of grids around the maximum (L_maximum), that is, t_CPU(L_maximum) ≈ t_CPU(L_optimum). For the finest grid considered in this analysis, i.e., 1025x1025 nodes, the difference between the L_maximum CPU time and the L_optimum CPU time is around 2%. For the Laplace problem, this difference is lower than 0.5%. One notices that when the number of grids is diminished in relation to the optimum, the CPU time increases very quickly. The simulations with L = 2, 3 and 4, for the largest problem (largest N), for example, require a lot of CPU time. The conclusions of this analysis are the same for both problems studied, TE and LP, except for the difference in CPU time between L_optimum and L_maximum. One notices, therefore, that the coupling of the equations of the thermoelastic problem does not influence the optimum number of grids. Similar results for linear and nonlinear one-dimensional problems were obtained by Pinto et al. (2005) and Pinto and Marchi (2006). The results obtained in this work for the 2D Laplace equation with N = 129x129 agree with the conclusions of Tannehill et al. (1997), who affirmed that the use of 5 or 6 grids, for the same problem, results in practically the same performance as 7 grids. Hirsch (1988) cites in his work that generally 4 or 5 grids are used. Roache (1998) affirms that the use of only 2 grid levels is not recommended.
4.3. Number of variables (N)
The optimum number of inner iterations (ITI_optimum) and the optimum number of grids (L_optimum), obtained previously, are considered in this study of the influence of the size of the problem on the CPU time. In this analysis, all grid sizes are considered, i.e., from the smallest, 5x5, to the largest supported by the PC memory, 2049x2049, using the multigrid method. The results obtained with the single-grid method (only one grid) with the MSI solver and with the Gauss Elimination for the thermoelasticity problem are also shown. In these cases, the adopted grids were 5x5, 9x9, ..., 257x257 and 5x5, ..., 33x33, respectively. Very refined grids in the single-grid method require extremely high CPU times, taking hours or even days to achieve convergence with the direct method. For the Laplace problem, the results of the single-grid method and the Gauss Elimination are not presented, but they can be found in Pinto and Marchi (2006). For small grids, the CPU time is close to zero. In this case, a methodology was adopted to obtain a time value that eliminates as much as possible the CPU-time error due to the measurement uncertainty of the TIMEF function. The main idea is
The values of p obtained for the single-grid method with MSI and for the Gauss Elimination method demonstrate a weak performance when the size of the problem increases.
Figure 3. Influence of the size of the problem on the CPU time for the methods MG-MSI, SG-MSI and Gauss Elimination.

Table 2. Values of c and p obtained from geometric least-squares fitting for the MSI and Gauss Elimination solvers in the two problems.

Problem            Solver              MG: c        MG: p    SG: c        SG: p
Laplace            MSI                 2.02x10^-6   1.18     1.60x10^-8   1.97
Thermoelasticity   MSI                 1.26x10^-5   1.17     2.16x10^-7   1.90
Thermoelasticity   Gauss Elimination   ----         ----     3.90x10^-9   3.35
Table 3. Values of c and p obtained from geometric least-squares fitting for the MSI solver in the two problems for N > 33x33.

Problem            Solver   MG: c        MG: p    SG: c        SG: p
Laplace            MSI      8.71x10^-6   1.05     6.62x10^-9   2.06
Thermoelasticity   MSI      5.23x10^-5   1.06     8.03x10^-8   2.01
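The values of c and p above come from fitting the CPU time as t_CPU(N) = c N^p; in log-log space this geometric least-squares fit is an ordinary linear regression, log t = log c + p log N. A sketch of the fitting (the data below are illustrative, not the paper's measurements):

```python
import numpy as np

def fit_c_p(n_values, cpu_times):
    # Least-squares fit of t = c * N^p in log-log space:
    # log t = log c + p * log N, a straight line with slope p.
    log_n = np.log(np.asarray(n_values, dtype=float))
    log_t = np.log(np.asarray(cpu_times, dtype=float))
    p, log_c = np.polyfit(log_n, log_t, 1)
    return np.exp(log_c), p

# Illustrative data: CPU times growing exactly as 8.7e-6 * N^1.05,
# i.e. the near-linear behaviour expected from the multigrid method.
N = np.array([5**2, 9**2, 17**2, 33**2, 65**2, 129**2])
t = 8.7e-6 * N**1.05
c, p = fit_c_p(N, t)
```

An exponent p near 1 confirms the approximately linear growth of CPU time with N for multigrid, while p near 2 for the single-grid runs reflects its much worse scaling.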
5. Conclusion
In this work, the influence of several parameters of the geometric multigrid method, with the CS scheme, on the CPU time necessary to solve a problem with two coupled equations and the Laplace problem (only one equation) was analyzed. The analyzed parameters were: the number of inner iterations (ITI), the number of grids (L) and the number of nodes (N). To discretize the equations, the Finite Difference Method with the Central Difference Scheme and Dirichlet boundary conditions was adopted.
Based on the results of this work, it was verified that:
1) The optimum number of inner iterations is 2, in any grid, in the two problems. The ITI can affect the CPU time significantly.
2) The optimum number of grids is around the maximum, i.e., L_optimum ≈ L_maximum. The number of grids can affect the CPU time significantly.
3) The coupling of two equations does not degenerate the performance of the multigrid method when compared to the case of one equation.