This article presents a novel technique that combines model order reduction and polynomial chaos expansion to efficiently perform variability analysis of electromagnetic systems. The key steps are:
1) Evaluate the original large system of equations at sample points in the stochastic space
2) Calculate a set of reduced order models with a common projection matrix
3) Compute the polynomial chaos expansion of the reduced order models
4) Calculate a polynomial chaos-based augmented system for variability analysis
The technique aims to overcome limitations of prior methods by first reducing the system and then applying polynomial chaos, rather than directly expanding the large original system.
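The four steps above can be sketched on a toy parameterized system. This is a hypothetical stand-in for the large electromagnetic system: a small diagonal matrix `A(s)` with one uniform stochastic parameter `s`, a rank-2 common projection basis from an SVD of the snapshots, and a Legendre polynomial chaos fit of the reduced states.

```python
import numpy as np
from numpy.polynomial import legendre

# Toy parameterized system A(s) x = b, s uniform on [-1, 1]
# (a hypothetical stand-in for the large electromagnetic system).
def A(s):
    return np.diag([4.0 + s, 3.0, 5.0 - s])

b = np.ones(3)
samples = np.linspace(-1.0, 1.0, 9)

# Steps 1-2: solve at the sample points and build one common projection matrix V
snapshots = np.column_stack([np.linalg.solve(A(s), b) for s in samples])
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :2]                              # rank-2 reduced-order basis

# Step 3: polynomial chaos expansion of the reduced states (Legendre basis)
reduced = V.T @ snapshots                 # reduced coordinates at the samples
Phi = legendre.legvander(samples, 2)      # chaos basis evaluated at the samples
coef, *_ = np.linalg.lstsq(Phi, reduced.T, rcond=None)

# Step 4: cheap evaluation of the surrogate at a new stochastic point
s_new = 0.3
x_hat = V @ legendre.legval(s_new, coef)
```

The point of the ordering is visible here: the chaos coefficients are fitted to the two reduced coordinates rather than to the full state, which is what keeps the augmented system small.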
A New Approach to Linear Estimation Problem in Multiuser Massive MIMO Systems (Radita Apriana)
A novel approach for solving the linear estimation problem in multi-user massive MIMO systems is proposed. In this approach, the difficulty of matrix inversion is attributed to the incomplete definition of the dot product. The general definition of the dot product implies that the columns of the channel matrix are always orthogonal, whereas in practice they may not be. If the latter information can be incorporated into the dot product, then the unknowns can be computed directly from projections without inverting the channel matrix. By doing so, the proposed method achieves an exact solution with a 25% reduction in computational complexity compared to the QR method. The proposed method is stable, offers the extra flexibility of computing any single unknown, and can be implemented in just twelve lines of code.
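The underlying idea, projections of the observation onto the channel columns combined with the pairwise column dot products that record their non-orthogonality, can be sketched as follows. This is a minimal illustration of that idea via the Gram system, not the authors' exact twelve-line routine; the matrix sizes are hypothetical.

```python
import numpy as np

# Hypothetical sketch: recover x from y = H x using column projections.
# The Gram matrix G records how non-orthogonal the channel columns are;
# with orthogonal columns G is diagonal and each unknown falls out directly.
rng = np.random.default_rng(1)
H = rng.standard_normal((8, 4))      # tall channel matrix, full column rank
x_true = rng.standard_normal(4)
y = H @ x_true

p = H.T @ y                          # projections of the observation onto columns
G = H.T @ H                          # pairwise column dot products
x_hat = np.linalg.solve(G, p)        # unknowns from projections, no inverse of H
```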
An Efficient Clustering Method for Aggregation on Data Fragments (IJMER)
Clustering is an important step in data analysis, with applications in numerous fields. Clustering ensembles have emerged as a powerful technique for combining different clustering results to obtain a quality clustering. Existing clustering aggregation algorithms are applied directly to the data points and become inefficient when the number of data points is large. This project defines an efficient approach for clustering aggregation based on data fragments, where a data fragment is any subset of the data. To increase efficiency, clustering aggregation is performed directly on the data fragments; under a comparison measure and normalized mutual information measures for clustering aggregation, enhanced versions of three clustering aggregation algorithms (Agglomerative, Furthest, and Local Search) are described. The fragment-based approach reduces computational complexity while increasing accuracy.
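One common way to form fragments (an assumption here, since the abstract only says "any subset of the data") is to group points that every input clustering puts into the same cluster, so that aggregation can operate on a handful of fragments instead of every point:

```python
from collections import defaultdict

def fragments(clusterings):
    """Group points that receive identical labels in every input clustering.

    clusterings: list of label lists, one per input clustering.
    Returns a list of fragments (lists of point indices).
    """
    groups = defaultdict(list)
    for point, labels in enumerate(zip(*clusterings)):
        groups[labels].append(point)
    return list(groups.values())

# Two clusterings of five points; aggregation can now work on 3 fragments
# instead of 5 individual points.
parts = fragments([[0, 0, 1, 1, 1],
                   [0, 0, 0, 1, 1]])
```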
Architecture neural network deep optimizing based on self organizing feature ... (journalBEEI)
Feedforward neural network (FNN) performance depends on the training algorithm and the architecture selection. Several parameters define the FNN architecture, such as the number of connections between layers, the number of hidden neurons in each hidden layer, and the number of hidden layers. The number of possible architectural combinations grows exponentially and cannot be managed manually, so a dedicated algorithm is needed to design the architecture automatically and build a system with better generalization ability. The FNN architecture can be determined using numerous optimization algorithms. This paper proposes a new methodology in which the numbers of neurons and hidden layers of the FNN are estimated by combining the self-organizing feature map (SOFM) training algorithm with the advantage that the best architecture is selected automatically by the SOFM from test-error criteria over a population of architectures. The proposed approach is tested on four benchmark classification datasets of different sizes.
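For orientation, a generic 1-D SOFM weight update (not the paper's architecture-selection procedure, just the standard map training it builds on) looks like this:

```python
import numpy as np

# Minimal 1-D self-organizing feature map update loop (generic SOFM,
# not the paper's architecture-selection procedure).
rng = np.random.default_rng(0)
data = rng.random((200, 2))                  # toy 2-D inputs in [0, 1)
weights = rng.random((5, 2))                 # 5 map units on a line

steps = 500
for t in range(steps):
    x = data[rng.integers(len(data))]
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))  # best matching unit
    lr = 0.5 * (1 - t / steps)                                 # decaying learning rate
    for j in range(len(weights)):
        h = np.exp(-((j - bmu) ** 2) / 2.0)                    # neighborhood kernel
        weights[j] += lr * h * (x - weights[j])
```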
In this video from the 2015 HPC User Forum in Broomfield, Barry Bolding from Cray presents: HPC + D + A = HPDA?
"The flexible, multi-use Cray Urika-XA extreme analytics platform addresses perhaps the most critical obstacle in data analytics today — limitation. Analytics problems are getting more varied and complex but the available solution technologies have significant constraints. Traditional analytics appliances lock you into a single approach and building a custom solution in-house is so difficult and time consuming that the business value derived from analytics fails to materialize. In contrast, the Urika-XA platform is open, high performing and cost effective, serving a wide range of analytics tools with varying computing demands in a single environment. Pre-integrated with the Hadoop and Spark frameworks, the Urika-XA system combines the benefits of a turnkey analytics appliance with a flexible, open platform that you can modify for future analytics workloads. This single-platform consolidation of workloads reduces your analytics footprint and total cost of ownership."
Learn more: http://www.cray.com/products/analytics/urika-xa
Watch the video presentation: http://wp.me/p3RLEV-3yR
Sign up for our insideBIGDATA Newsletter: http://insidebigdata.com/newsletter
An experimental evaluation of similarity-based and embedding-based link predi... (IJDKP)
The task of inferring missing links or predicting future ones in a graph based on its current structure
is referred to as link prediction. Link prediction methods that are based on pairwise node similarity
are well-established approaches in the literature and show good prediction performance in many real-world graphs, though they are heuristic. On the other hand, graph embedding approaches learn low-dimensional representations of nodes in a graph and are capable of capturing inherent graph features, and thus support the subsequent link prediction task. This paper studies a selection of methods from both categories on several benchmark (homogeneous) graphs with different properties from various domains. Beyond the intra- and inter-category comparison of the methods' performance, our aim is also to uncover interesting connections between Graph Neural Network (GNN)-based methods and heuristic ones as a means to alleviate the well-known black-box limitation.
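Two of the classic pairwise-similarity heuristics mentioned above can be stated in a few lines; the tiny graph below is an illustrative example, not one of the paper's benchmarks:

```python
def common_neighbors(adj, u, v):
    # adj: dict mapping each node to its set of neighbors
    return len(adj[u] & adj[v])

def jaccard(adj, u, v):
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

# Score a candidate link in a tiny toy graph.
adj = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}
cn = common_neighbors(adj, "a", "d")   # shared neighbor: b
```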
Hex-Cell is an interconnection network with attractive features, including the ability to embed topological structures such as bus, ring, tree, and mesh topologies. In this paper, we present two algorithms for embedding bus and ring topologies onto the Hex-Cell interconnection network. We use three metrics to evaluate the proposed algorithms: dilation, congestion, and expansion. Our evaluation results show that the congestion of both proposed algorithms is equal to one, and that the dilation is equal to 2d-1 for the first algorithm and 1 for the second.
The Effects of Mutual Coupling and Transformer Connection Type on Frequency R... (ijsrd.com)
In this paper, a novel harmonic modeling technique utilizing the concept of multi-terminal components is presented and applied to frequency scan analysis in multiphase distribution systems. The proposed modeling technique is based on gathering same-phase buses and elements into separate groups (phase grouping technique, PGT) and uses multi-terminal components to model the three-phase distribution system. Using multi-terminal components and PGT, distribution system elements, particularly lines and transformers, can be modeled effectively even in the harmonic domain. The proposed modeling technique is applied to a test system for frequency scan analysis in order to show the frequency response of the test system in single- and three-phase conditions. Consequently, the effects of mutual coupling and transformer connection types on three-phase frequency scan responses are analyzed for symmetrical and asymmetrical line configurations.
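The essence of a frequency scan, sweeping the driving-point impedance over frequency and looking for resonances, can be sketched on a single hypothetical series RLC branch (the component values below are toy numbers, not from the paper's test system):

```python
import numpy as np

# Hypothetical single-branch frequency scan: sweep the driving-point
# impedance of a series RLC branch and locate the resonance dip.
R, L, C = 0.5, 1e-3, 1e-5            # ohms, henries, farads (toy values)
f = np.linspace(10.0, 2000.0, 2000)  # scan frequencies in Hz
w = 2 * np.pi * f
Z = R + 1j * w * L + 1.0 / (1j * w * C)

f_res = f[np.argmin(np.abs(Z))]      # series resonance appears as a minimum of |Z|
```

At resonance the reactances cancel, so the scan bottoms out near `|Z| = R` at roughly 1/(2π√(LC)) ≈ 1592 Hz for these values.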
Voronoi diagram with fuzzy number and sensor data in an indoor navigation for... (TELKOMNIKA JOURNAL)
Finding the shortest and safest path during an emergency is critical. In this paper, indoor navigation during an emergency is investigated using a combination of the Voronoi diagram and fuzzy numbers. The challenge in indoor navigation is to analyze the network when the shortest path algorithm does not work as expected. There are several existing methods to generate the network model. First, this paper discusses the feasibility and accuracy of each method when implemented in a building environment. Next, it discusses selected algorithms that determine the best route during an emergency. The algorithm has to ensure that the selected route is the shortest and safest route to the destination. During a disaster, there are many uncertainties to deal with in determining the shortest and safest route, and fuzzy logic is well suited to handling them. Based on sensor data, this paper also discusses how to solve the shortest path problem using fuzzy numbers.
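One simple way to combine fuzzy edge costs with a shortest path search (an illustrative assumption, since the paper does not spell out its defuzzification) is to defuzzify each triangular fuzzy number to its centroid and run Dijkstra on the crisp costs:

```python
import heapq

def defuzzify(tfn):
    # centroid defuzzification of a triangular fuzzy number (a, b, c)
    a, b, c = tfn
    return (a + b + c) / 3.0

def shortest_path(graph, src, dst):
    # Dijkstra over crisp costs obtained by defuzzifying fuzzy edge weights
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + defuzzify(w)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Edge costs as triangular fuzzy numbers (min, mode, max), hypothetical values.
graph = {
    "A": [("B", (1, 2, 3)), ("C", (4, 5, 6))],
    "B": [("C", (1, 1, 1))],
}
path, cost = shortest_path(graph, "A", "C")
```

In an emergency setting, the three numbers of each triangle could encode the uncertainty reported by sensors about a corridor's traversal cost.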
Interior Dual Optimization Software Engineering with Applications in BCS Elec... (BRNSS Publication Hub)
Interior optimization software, algorithms, and programming methods provide a computational tool with a number of applications. Theory and computational demonstrations/techniques were presented primarily in previous articles, and the mathematical framework of this new method (Casesnoves, 2018-2020) was also proven. The links among interior optimization, graphical optimization (Casesnoves, 2016-2017), and classical methods for non-linear equation systems were developed. This paper focuses on software engineering, with the implementation of mathematical methods in programming as its primary subject. The specific details of adapting interior optimization computationally to each specific problem, such as engineering, physics, and electronics physics, are explained. The second subject is electronics applications of the software in the field of superconductors. It comprises a series of new BCS equation optimizations for Type I superconductors, based on previously published research on other Type I superconductors. A new dual optimization for two superconductors is simulated. Results are acceptable, with low errors and imaging demonstrations of the utility of interior optimization.
Image Segmentation Using Two Weighted Variable Fuzzy K Means (Editor IJCATR)
Image segmentation is the first step in image analysis and pattern recognition. It is the process of dividing an image into different regions such that each region is homogeneous. Accurate and effective segmentation algorithms are very useful in many fields, especially medical imaging. This paper presents a new approach to image segmentation by applying the k-means algorithm with two-level variable weighting. In image segmentation, clustering algorithms are very popular because they are intuitive and easy to implement. The k-means and fuzzy k-means clustering algorithms are among the most widely used in the literature, and many authors compare their new proposals against results achieved by k-means and fuzzy k-means. This paper proposes a new clustering algorithm called TW-fuzzy k-means, an automated two-level variable weighting clustering algorithm for segmenting objects. In this algorithm, a variable weight is also assigned to each variable on the current partition of the data. The approach can be applied to general images and/or specific images (i.e., medical and microscopic images). The proposed TW-fuzzy k-means algorithm provides better segmentation performance for various types of images; based on the results obtained, it gives better visual quality compared to several other clustering methods.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
A novel scheme for reliable multipath routing through node independent direct... (eSAT Journals)
Multipath routing is essential in the wake of voice over IP and multimedia streaming for efficient data transmission. The growing usage of such networks also demands fast recovery from network failures. Multipath routing is one of the promising routing schemes for accommodating the diverse requirements of the network, with provisions such as load balancing and improved bandwidth. Cho et al. introduced a resilient multipath routing scheme based on directed acyclic graphs. These graphs enable multipath routing with all possible edges while ensuring guaranteed recovery from single-point failures. We also built a prototype application that demonstrates the efficiency of the scheme. The simulation results revealed that the scheme is useful and can be used in real-world network applications.
Index Terms – Multipath routing, failure recovery, directed acyclic graphs
Extended Fuzzy C-Means with Random Sampling Techniques for Clustering Large Data (AM Publications)
Big data is any data that cannot be loaded into a computer's primary memory. Clustering is a primary task in pattern recognition and data mining, and we need algorithms that scale well with data size. The literal Fuzzy C-Means (FCM) implementation is serial: the algorithm attempts to partition a finite collection of n elements into c fuzzy clusters, so, given a finite set of data, it returns a list of c cluster centers. However, it does not scale well and slows down as the data grows, which makes it impractical and sometimes undesirable. In this paper, we propose an extended version of the fuzzy c-means clustering algorithm using various random sampling techniques, to study which method scales well for large or very large data.
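For reference, the literal FCM baseline that the sampling variants extend alternates between a weighted center update and a membership update; a minimal sketch (standard FCM only, the sampled extensions are not shown) is:

```python
import numpy as np

# Minimal standard Fuzzy C-Means sketch (the serial baseline the paper
# extends; its random-sampling variants are not shown here).
def fcm(X, c, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)             # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)  # standard membership update
    return centers, U

# Two well-separated toy blobs.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(10.0, 0.1, (20, 2))])
centers, U = fcm(X, 2)
```

Each pass touches every point, which is exactly the cost the paper's random-sampling extensions aim to avoid.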
Template matching is a basic method in image analysis for extracting useful information from images. In this paper, we suggest a new method for pattern matching. Our method transforms the two-dimensional template image into a one-dimensional vector; all sub-windows (of the same size as the template) in the reference image are likewise transformed into one-dimensional vectors. Three similarity measures, SAD, SSD, and the Euclidean distance, are used to compute the likeness between the template and all sub-windows in the reference image to find the best match. The experimental results show the superior performance of the proposed method over conventional methods on various templates of different sizes.
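The flattened-window scheme with the SAD measure can be sketched directly (a toy setup with a randomly generated image; SSD and the Euclidean distance slot into the same loop):

```python
import numpy as np

# Sketch of the flattened-window idea with the SAD measure (toy setup;
# SSD or Euclidean distance can replace the inner score).
def best_match_sad(image, template):
    th, tw = template.shape
    t = template.ravel().astype(float)           # template as a 1-D vector
    best, pos = float("inf"), (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw].ravel().astype(float)
            sad = np.abs(w - t).sum()            # sum of absolute differences
            if sad < best:
                best, pos = sad, (i, j)
    return pos, best

# Embed the template at (3, 4) and recover its position.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, (12, 12))
template = image[3:6, 4:7].copy()
pos, score = best_match_sad(image, template)
```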
A Dependent Set Based Approach for Large Graph Analysis (Editor IJCATR)
Nowadays, social and computer networks produce graphs of thousands of nodes and millions of edges. Such large graphs are used to store and represent information. As a graph is a complex data structure, it requires extra processing. Partitioning or clustering methods are used to decompose a large graph. In this paper, a dependent-set-based graph partitioning approach is proposed that decomposes a large graph into subgraphs. It creates uniform partitions with very few edge cuts and prevents loss of information. The work also focuses on an approach that handles dynamic updates to a large graph and represents a large graph in abstract form.
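The edge-cut quality measure used to judge such partitions is simple to state; the snippet below is an illustrative metric only, not the paper's dependent-set algorithm:

```python
# Count how many edges cross the partition boundary (the "edge cut").
def edge_cut(edges, part):
    # edges: iterable of (u, v); part: node -> partition id
    return sum(1 for u, v in edges if part[u] != part[v])

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
part = {0: 0, 1: 0, 2: 1, 3: 1}
cut = edge_cut(edges, part)     # edges (1,2), (3,0), (1,3) cross the cut
```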
Automated Piecewise-Linear Fitting of S-Parameters step-response (PWLFIT) for... (Piero Belforte)
An innovative full time-domain macromodeling technique for general, linear multiport systems is described. The methodology is defined in a digital wave framework, and time-domain simulations are performed via an efficient method called Segment Fast Convolution (SFC). It is based on a piecewise-constant (PWC) model of the impulse response of the scattering parameters, computed starting from a piecewise-linear fitting of their step response (PWLFIT). Such a step response is directly available from time-domain reflectometer measurements (TDR/TDT) or equivalent simulations. The model-building phase is performed in a fast automated framework, and an analytic formulation of the computational efficiency of the SFC with respect to standard time-domain convolution is given. Two application examples are used to verify the PWLFIT performance and to perform a comparison with macromodeling methods defined in the frequency domain, such as Vector Fitting (VF).
Index Terms—Digital wave models, time-domain macromodeling, S-parameters, step response.
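The core relationship, a piecewise-constant impulse model obtained from the slopes of a piecewise-linear step-response fit, can be illustrated on a toy first-order response (an assumed example, not one of the paper's application cases):

```python
import numpy as np

# Toy sketch of the PWC-impulse idea: differencing a sampled step response
# yields a piecewise-constant impulse model whose convolution with a unit
# step reproduces the original response (assumed first-order example).
t = np.linspace(0.0, 1.0, 101)
step = 1.0 - np.exp(-5.0 * t)        # step response, e.g. from a TDR measurement
h = np.diff(step)                    # PWC impulse response (slopes of the PWL fit)
u = np.ones(100)                     # unit-step input samples
y = np.convolve(h, u)[:100]          # discrete convolution reconstructs the step
```

Driving the same PWC impulse model with any other input waveform in place of `u` gives the corresponding time-domain output, which is what the SFC accelerates.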
A surrogate-assisted modeling and optimization method for planning communicat... (Power System Operation)
The development of industrial informatization stimulates the implementation of cyber-physical systems (CPS) in distribution networks. As a close integration of the power network infrastructure with the cyber system, the research of design methodologies and tools for CPS has gained widespread interest, given their heterogeneous characteristics. To address the problem of planning the communication system in a distribution network CPS, this paper first proposes an optimization model utilizing topology potential equilibrium; the mutual influence of nodes and the spatial distribution of the topological structure are mathematically described. Then, facing a complex optimization problem in binary space with multiple constraints, a novel binary bare bones fireworks algorithm (BBBFA) with a surrogate-assisted model is proposed. In the proposed algorithm, the surrogate model, a back-propagation neural network, replaces the complex constraints by incremental approximation of the nonlinear constraint functions, reducing the difficulty of finding the optimal solution. The communication system planning of the IEEE 39-bus power system, which comprises four terminal units, was optimized. Considering different degrees of heterogeneity, four planning programs were evaluated for practical considerations. The simulation results of the proposed algorithm were compared with other representative methods, demonstrating the effectiveness of the proposed method in solving communication system planning problems for distribution networks.
The Effects of Mutual Coupling and Transformer Connection Type on Frequency R...ijsrd.com
in this paper, a novel harmonic modeling technique by utilizing the concept of multi -terminal components is presented and applied to frequency scan analysis in multiphase distribution system. The proposed modeling technique is based on gathering the same phase busses and elements as a separate group (phase grouping technique, PGT) and uses multi-terminal components to model three-phase distribution system. Using multi-terminal component and PGT, distribution system elements, particularly, lines and transformers can effectively be modeled even in harmonic domain. The proposed modeling technique is applied to a test system for frequency scan analysis in order to show the frequency response of the test system in single and three-phase conditions. Consequently, the effects of mutual coupling and transformer connection types on three-phase frequency scan responses are analyzed for symmetrical and asymmetrical line configurations.
Voronoi diagram with fuzzy number and sensor data in an indoor navigation for...TELKOMNIKA JOURNAL
Finding shortest and safest path during emergency situation is critical. In this paper, an indoor navigation during an emergency time is investigated using the combination of Voronoi Diagram and fuzzy number. The challenge in indoor navigation is to analyses the network when the shortest path algorithm does not work as always expected. There are some existing methods to generate the network model. First, this paper will discuss the feasibility and accuracy of each method when it is implemented on building environment. Next, this paper will discuss selected algorithms that determine the selection of the best route during an emergency situation. The algorithm has to make sure that the selected route is the shortest and the safest route to the destination. During a disaster, there are many uncertainties to deal with in determining the shortest and safest route. Fuzzy logic can be hardly called for to deal with these uncertainties. Based on sensor data, this paper will also discuss how to solve shortest path problem using a fuzzy number.
Interior Dual Optimization Software Engineering with Applications in BCS Elec...BRNSS Publication Hub
Interior optimization software and algorithms programming methods provide a computational tool with
a number of applications. Theory and computational demonstrations/techniques were primarily shown
in previous articles. The mathematical framework of this new method, (Casesnoves, 2018–2020), was
also proven.The links among interior optimization, graphical optimization (Casesnoves, 2016–7), and
classical methods in non-linear equations systems were developed. This paper is focused on software
engineering with mathematical methods implementation in programming as a primary subject. The
specific details for interior optimization computational adaptation on every specific problem, such as
engineering, physics, and electronics physics, are explained. Second subject is electronics applications
of software in the field of superconductors. It comprises a series of new BCS equation optimization
for Type I superconductors, based on previous research for other different Type I superconductors
previously published. A new dual optimization for two superconductors is simulated. Results are
acceptable with low errors and imaging demonstrations of the interior optimization utility.
Image Segmentation Using Two Weighted Variable Fuzzy K MeansEditor IJCATR
Image segmentation is the first step in image analysis and pattern recognition. Image segmentation is the process of dividing an image into different regions such that each region is homogeneous. The accurate and effective algorithm for segmenting image is very useful in many fields, especially in medical image. This paper presents a new approach for image segmentation by applying k-means algorithm with two level variable weighting. In image segmentation, clustering algorithms are very popular as they are intuitive and are also easy to implement. The K-means and Fuzzy k-means clustering algorithm is one of the most widely used algorithms in the literature, and many authors successfully compare their new proposal with the results achieved by the k-Means and Fuzzy k-Means. This paper proposes a new clustering algorithm called TW-fuzzy k-means, an automated two-level variable weighting clustering algorithm for segmenting object. In this algorithm, a variable weight is also assigned to each variable on the current partition of data. This could be applied on general images and/or specific images (i.e., medical and microscopic images). The proposed TW-Fuzzy k-means algorithm in terms of providing a better segmentation performance for various type of images. Based on the results obtained, the proposed algorithm gives better visual quality as compared to several other clustering methods.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
A novel scheme for reliable multipath routing through node independent direct...eSAT Journals
Abstract Multipath routing is essential in the wake of voice over IP, multimedia streaming for efficient data transmission. The growing usage of such network requirements also demands fast recovery from network failures. Multipath routing is one of the promising routing schemes to accommodate the diverse requirements of the network with provision such as load balancing and improved bandwidth. Cho et al. introduced a resilient multipath routing scheme known as directed acyclic graphs. These graphs enable multipath routing with all possible edges while ensuring guaranteed recovery from single point of failures. We also built a prototype application that demonstrates the efficiency of the scheme. The simulation results revealed that the scheme is useful and can be used in real world network applications.
Index Terms – Multipath routing, failure recovery, directed acyclic graphs
Extended Fuzzy C-Means with Random Sampling Techniques for Clustering Large DataAM Publications
Big data are any data that you cannot load into your computer’s primary memory. Clustering is a primary
task in pattern recognition and data mining. We need algorithms that scale well with the data size. The former
implementation, literal Fuzzy C-Means is linear or serialized. FCM algorithm attempts to partition a finite collection
of n elements into collection of c fuzzy clusters. So, given a finite set of data, this algorithm returns a list of c cluster
centers. However it doesn't scale well and slows down with increase in the size of data and is thus impractical and
sometimes undesirable. In this paper, we propose an extended version of fuzzy c-means clustering algorithm by means of various random sampling techniques to study which method scales well for large or very large data.
Template matching is a basic method in image analysis for extracting useful information from images. In this paper, we suggest a new method for pattern matching. Our method transforms the two-dimensional template image into a one-dimensional vector; likewise, all sub-windows (of the same size as the template) in the reference image are transformed into one-dimensional vectors. Three similarity measures, SAD, SSD, and Euclidean distance, are used to compute the likeness between the template and all sub-windows in the reference image to find the best match. The experimental results show the superior performance of the proposed method over conventional methods on various templates of different sizes.
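The 1-D flattening and the three similarity measures can be sketched as follows. This is a generic illustration on synthetic data, not the authors' code; the function name and test image are assumptions.

```python
import numpy as np

def match_template(ref, tpl):
    """Slide tpl over ref; flatten each sub-window and the template to 1-D
    vectors and score them with SAD, SSD, and Euclidean distance."""
    th, tw = tpl.shape
    t = tpl.ravel().astype(float)                      # template as a 1-D vector
    best = {"SAD": (np.inf, None), "SSD": (np.inf, None), "EUC": (np.inf, None)}
    for r in range(ref.shape[0] - th + 1):
        for c in range(ref.shape[1] - tw + 1):
            w = ref[r:r + th, c:c + tw].ravel().astype(float)
            sad = np.abs(w - t).sum()                  # sum of absolute differences
            ssd = ((w - t) ** 2).sum()                 # sum of squared differences
            euc = np.sqrt(ssd)                         # Euclidean = sqrt(SSD)
            for name, score in (("SAD", sad), ("SSD", ssd), ("EUC", euc)):
                if score < best[name][0]:
                    best[name] = (score, (r, c))
    return best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32))
tpl = ref[10:18, 5:13].copy()            # ground-truth location (10, 5)
best = match_template(ref, tpl)
print(best["SAD"][1], best["SSD"][1])    # both recover (10, 5) for an exact copy
```

Since SSD and Euclidean distance are monotonically related, they always select the same best window; SAD can differ on noisy data because it penalizes outliers less.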
A Dependent Set Based Approach for Large Graph AnalysisEditor IJCATR
Nowadays, social and computer networks produce graphs of thousands of nodes and millions of edges. Such large graphs are used to store and represent information. As a graph is a complex data structure, it requires extra processing. Partitioning or clustering methods are used to decompose a large graph. In this paper, a dependent-set-based graph partitioning approach is proposed that decomposes a large graph into subgraphs. It creates uniform partitions with very few edge cuts and also prevents loss of information. The work also focuses on an approach that handles dynamic updates in a large graph and represents the graph in abstract form.
Automated Piecewise-Linear Fitting of S-Parameters step-response (PWLFIT) for...Piero Belforte
An innovative full time-domain macromodeling technique for general, linear multiport systems is described. The methodology is defined in a digital wave framework, and time-domain simulations are performed via an efficient method called Segment Fast Convolution (SFC). It is based on a piecewise-constant (PWC) model of the impulse response of the scattering parameters, computed starting from a piecewise-linear fitting of their step response (PWLFIT). Such a step response is directly available from time-domain reflectometer measurements (TDR/TDT) or equivalent simulations. The model-building phase is performed in a fast automated framework, and an analytic formulation of the computational efficiency of the SFC with respect to standard time-domain convolution is given. Two application examples are used to verify the PWLFIT performance and to perform a comparison with macromodeling methods defined in the frequency domain, such as Vector Fitting (VF).
Index Terms—Digital wave models, time-domain macromodeling, S-parameters, step response.
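The core PWLFIT-to-PWC relationship described above can be sketched in a toy example: the derivative of a piecewise-linear step response is piecewise constant, and those segment slopes form the PWC impulse-response model. This is an illustrative sketch under assumed data, not the authors' tool.

```python
import numpy as np

def pwc_impulse_from_step(t_knots, s_knots):
    """Segment slopes of a piecewise-linear step response = PWC impulse response."""
    t = np.asarray(t_knots, float)
    s = np.asarray(s_knots, float)
    return np.diff(s) / np.diff(t)          # one constant level per segment

# Example: step response of a first-order (RC-like) port, sampled at knots,
# standing in for TDR/TDT data.
tau = 1.0
t_knots = np.linspace(0.0, 5.0 * tau, 21)
s_knots = 1.0 - np.exp(-t_knots / tau)      # step response at the knots
h_pwc = pwc_impulse_from_step(t_knots, s_knots)

# Integrating the PWC impulse response recovers the step values at the knots.
recon = np.concatenate(([s_knots[0]],
                        s_knots[0] + np.cumsum(h_pwc * np.diff(t_knots))))
print(np.allclose(recon, s_knots))
```

The convolution of such a PWC kernel with an input reduces to sums over constant segments, which is what makes the segment-based fast convolution cheaper than a sample-by-sample convolution.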
A surrogate-assisted modeling and optimization method for planning communicat...Power System Operation
The development of industrial informatization stimulates the implementation of cyber-physical systems (CPS) in distribution networks. As a close integration of the power network infrastructure with the cyber system, research on design methodologies and tools for CPS has gained widespread interest given their heterogeneous characteristics. To address the problem of planning the communication system in a distribution-network CPS, this paper first proposes an optimization model utilizing topology potential equilibrium, in which the mutual influence of nodes and the spatial distribution of the topological structure are mathematically described. Then, facing a complex optimization problem in binary space with multiple constraints, a novel binary bare bones fireworks algorithm (BBBFA) with a surrogate-assisted model is proposed. In the proposed algorithm, the surrogate model, a back-propagation neural network, replaces the complex constraints by incremental approximation of the nonlinear constraint functions, reducing the difficulty of finding the optimal solution. The communication system planning of the IEEE 39-bus power system, which comprises four terminal units, was optimized. Considering different degrees of heterogeneity, four programs were involved in the planning for practical considerations. The simulation results of the proposed algorithm were compared with those of other representative methods, demonstrating the effectiveness of the proposed method for solving communication system planning problems in distribution networks.
Co-Simulation Interfacing Capabilities in Device-Level Power Electronic Circu...IJPEDS-IAES
Power electronic circuit simulation today has become increasingly demanding in both speed and accuracy. While almost every simulator has its own advantages and disadvantages, co-simulations are becoming more prevalent. This paper provides an overview of the co-simulation capabilities of device-level circuit simulators. More specifically, a listing of device-level simulators with their salient features is compared and contrasted, and the co-simulation interfaces between several simulation tools are discussed. A case study is presented to demonstrate co-simulation between a device-level simulator (PSIM), a system-level simulator (Simulink), and a finite element simulation tool (FLUX). Results demonstrate the necessity and convenience, as well as the drawbacks, of such a comprehensive simulation.
Data detection method for uplink massive MIMO systems based on the long recu...IJECEIAES
Although the minimum mean square error (MMSE) approach is recognized to be near optimal for uplink large-scale multiple-input multiple-output (MIMO) systems, the procedure involves the difficulty of matrix inversion. The long recurrence enlarged conjugate gradient (LRE-CG) approach is proposed in this study as a way to realize the MMSE algorithm iteratively while avoiding the complications of matrix inversion. In addition, a diagonal-approximate initial solution to the LRE-CG approach is used to speed up the convergence rate and reduce the complexity required. The LRE-CG-based approach is found to significantly reduce computational complexity. Simulation results show that the new methodology surpasses well-established methods such as the Neumann series approximation-based method and the Gauss-Seidel iterative method. With a small number of iterations, the suggested approach achieves near-optimal performance relative to the standard MMSE algorithm.
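The idea of realizing the MMSE solution iteratively without matrix inversion can be illustrated with a plain conjugate gradient solver, used here as a simplified stand-in for LRE-CG; the system sizes, noise level, and names are illustrative assumptions, not from the paper.

```python
import numpy as np

def cg_solve(A, b, iters=50, tol=1e-10):
    """Plain conjugate gradient for a Hermitian positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = np.vdot(r, r)                       # r^H r (vdot conjugates first arg)
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / np.vdot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r)
        if np.sqrt(abs(rs_new)) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# MMSE detection for a tall MIMO channel: solve (H^H H + s2 I) x = H^H y
rng = np.random.default_rng(0)
M, K, s2 = 64, 8, 0.01                       # antennas, users, noise variance
H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2 * M)
x_true = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=K)  # QPSK symbols
y = H @ x_true + np.sqrt(s2 / 2) * (rng.normal(size=M) + 1j * rng.normal(size=M))
A = H.conj().T @ H + s2 * np.eye(K)          # Gram matrix is Hermitian PD
b = H.conj().T @ y
x_mmse = cg_solve(A, b)
print(np.allclose(x_mmse, np.linalg.solve(A, b), atol=1e-6))
```

Each iteration costs only matrix-vector products (O(K²)), so a few iterations are far cheaper than the O(K³) explicit inversion the abstract refers to; LRE-CG and the diagonal initial guess refine exactly this trade-off.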
Comprehensive Performance Evaluation on Multiplication of Matrices using MPIijtsrd
In matrix multiplication we refer to a concept that is used in technology applications such as digital image processing, digital signal processing, and graph problem solving. Multiplication of huge matrices requires a lot of computing time, as its complexity is O(n³). Because most engineering and science applications require higher computational throughput in minimum time, many sequential and parallel algorithms have been developed. In this paper, methods of matrix multiplication are selected, implemented, and analyzed. A performance analysis is evaluated, and some recommendations are given for using the OpenMP and MPI methods of parallel computing. Adamu Abubakar I | Oyku A | Mehmet K | Amina M. Tako, "Comprehensive Performance Evaluation on Multiplication of Matrices using MPI"
Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-2 , February 2020,
URL: https://www.ijtsrd.com/papers/ijtsrd30015.pdf
Paper Url : https://www.ijtsrd.com/engineering/electrical-engineering/30015/comprehensive-performance-evaluation-on-multiplication-of-matrices-using-mpi/adamu-abubakar-i
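The O(n³) cost cited above comes from the classical triple-loop algorithm, sketched here for reference against numpy's built-in product. This is a generic illustration, not the paper's MPI code.

```python
import numpy as np

def matmul_naive(A, B):
    """Classical O(n^3) triple-loop matrix multiplication."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "inner dimensions must agree"
    C = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            s = 0.0
            for k in range(m):      # inner dot product: m multiply-adds
                s += A[i, k] * B[k, j]
            C[i, j] = s
    return C

rng = np.random.default_rng(0)
A, B = rng.random((30, 40)), rng.random((40, 20))
print(np.allclose(matmul_naive(A, B), A @ B))
```

Each entry C[i, j] is computed independently of the others, which is what makes row-block (or tile) distribution of C across OpenMP threads or MPI ranks natural.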
A comparative study of three validities computation methods for multimodel ap...IJECEIAES
The multimodel approach offers very satisfactory results in the modelling, diagnosis, and control of complex systems. In the modelling case, this approach involves three steps: the determination of the model library, the computation of the validities, and the establishment of the final model. In this context, this paper elaborates a comparative study between three recent methods of validity computation, highlighting the method that offers the best performance in terms of precision. To achieve this goal, we apply the three methods to two simulation examples in order to compare their performances.
Digital Wave Simulation of Quasi-Static Partial Element Equivalent Circuit Me...Piero Belforte
This is an extended version of the paper published in IEEE Transactions on EMC, October 2016. PEEC modeling is a well-established technique for obtaining a circuit equivalent of an electromagnetic problem. The time-domain solution of such models is usually performed using nodal voltages and branch currents, or sometimes charges and currents. The present paper describes a possible alternative approach obtained by expressing and solving the problem in the wave domain. The digital wave theory is used to find an equivalent representation of the PEEC circuit in the wave domain. Through a pertinent continuous-to-discrete time transformation, the constitutive relations for partial inductances, capacitances, and resistances are translated into an explicit form. The combination of such equations with Kirchhoff's laws allows a semi-explicit resolution scheme to be achieved. Three different physical configurations are analyzed, and their extracted digital wave PEEC models are simulated at growing sizes using the general-purpose Digital Wave Simulator (DWS). The results are compared to those obtained using standard SPICE simulators in both linear and nonlinear cases. When the size of the model is manageable by SPICE, excellent accuracy and a speed-up factor of up to three orders of magnitude are observed, with much lower memory requirements. PEEC model sizes manageable by DWS are also an order of magnitude larger than with SPICE. A comparative analysis of results, including the effect of parameters such as the simulation time step choice, is also presented.
Min-based qualitative possibilistic networks are one of the effective tools for a compact representation of decision problems under uncertainty. The exact approaches for computing decisions based on possibilistic networks are limited by the size of the possibility distributions. Generally, these approaches are based on possibilistic propagation algorithms. An important step in the computation of the decision is the transformation of the DAG into a secondary structure, known as the junction tree. This transformation is known to be costly and represents a difficult problem. We propose in this paper a new approximate approach for the computation of decisions under uncertainty within possibilistic networks. Computing the optimal optimistic decision no longer goes through the junction tree construction step; instead, it is performed by calculating the degree of normalization in the moral graph resulting from merging the possibilistic network codifying the agent's knowledge with the one codifying its preferences.
2-DOF Block Pole Placement Control Application To: Have-DASH-IIBITT MissileZac Darcy
In a multivariable servomechanism design, it is required that the output vector track a certain reference vector while satisfying desired transient specifications; for this purpose, a 2-DOF control law consisting of a state feedback gain and a feedforward scaling gain is proposed. The control law is designed using the block pole placement technique by assigning a set of desired block poles in different canonical forms. The resulting control is simulated for a linearized model of the HAVE DASH II BTT missile; numerical results are analyzed and compared in terms of transient response, gain magnitude, performance robustness, stability robustness, and tracking. The suitable structure for this case study is then selected.
A Comparison between FPPSO and B&B Algorithm for Solving Integer Programming ...Editor IJCATR
The Branch and Bound technique (B&B) is commonly used for intelligent search in finding a set of integer solutions within a space of interest. The corresponding binary tree structure provides natural parallelism, allowing concurrent evaluation of sub-problems using parallel computing technology. The Flower Pollination Algorithm is a recently developed method in the field of computational intelligence. This paper presents an improved version of the flower pollination meta-heuristic algorithm, FPPSO, for solving integer programming problems. The proposed algorithm combines the standard flower pollination algorithm (FP) with the particle swarm optimization (PSO) algorithm to improve the searching accuracy. Numerical results show that FPPSO is able to obtain the optimal results in comparison with traditional methods (branch and bound) and other search algorithms. The benefit of the proposed algorithm, however, is its ability to obtain the optimal solution with less computation, which saves time in comparison with the branch and bound algorithm. Keywords: branch and bound; flower pollination algorithm; meta-heuristics; optimization; particle swarm optimization; integer programming.
ENERGY PERFORMANCE OF A COMBINED HORIZONTAL AND VERTICAL COMPRESSION APPROACH...IJCNCJournal
Energy efficiency is an essential issue to be reckoned with in wireless sensor network development. Since the low-powered sensor nodes deplete their energy in transmitting the collected information, several strategies have been proposed to investigate communication power consumption in order to reduce the amount of transmitted data without affecting information reliability. Lossy compression is a promising solution recently adapted to overcome the challenging energy consumption by exploiting data correlation and discarding redundant information. In this paper, we propose a hybrid compression approach based on two dimensions, specified as horizontal compression (HC) and vertical compression (VC), typically implemented in a cluster-based routing architecture. The proposed scheme considers two key performance metrics, energy expenditure and data accuracy, to decide the adequate compression approach based on the HC-VC or VC-HC configuration according to each WSN application's requirements. Simulation results exhibit the performance of both proposed approaches in terms of extending the clustering network lifetime.
Fault diagnosis using genetic algorithms and principal curveseSAT Journals
Abstract: Several applications of nonlinear principal component analysis (NPCA) have appeared recently in process monitoring and fault diagnosis. In this paper, a new approach is proposed for fault detection based on principal curves and genetic algorithms. The principal curve is a generalization of the linear principal component (PCA), introduced by Hastie as a parametric curve that passes satisfactorily through the middle of the data. Existing principal curve algorithms employ the first component of the data as an initial estimation of the principal curve. However, the dependence on this initial line leads to a lack of flexibility, and the final curve is satisfactory only for specific problems. In this paper we extend this work in two ways. First, we propose a new method based on genetic algorithms to find the principal curve, in which lines are fitted and connected to form polygonal lines (PL). Second, a potential application of principal curves is discussed. An example is used to illustrate fault diagnosis of a nonlinear process using the proposed approach. Index Terms: Principal curve, Genetic algorithm, Nonlinear principal component analysis, Fault detection.
What is the TDS Return Filing Due Date for FY 2024-25.pdfseoforlegalpillers
It is crucial for taxpayers to understand the TDS return filing due dates so that they can fulfill their TDS obligations efficiently. Taxpayers can avoid penalties by sticking to the deadlines and filing TDS accurately. Timely filing of TDS will ensure the availability of tax credits. You can also seek the professional guidance of experts like Legal Pillers for timely filing of the TDS return.
Unveiling the Secrets How Does Generative AI Work.pdfSam H
At its core, generative artificial intelligence relies on the concept of generative models, which serve as engines that churn out entirely new data resembling their training data. It is like a sculptor who has studied many forms found in nature and then uses this knowledge to create sculptures from his imagination that have never been seen anywhere before. In cyberspace, GANs work in much the same way.
What are the main advantages of using HR recruiter services.pdfHumanResourceDimensi1
HR recruiter services offer top talent to companies according to their specific needs. They handle all recruitment tasks, from job posting to onboarding, and help companies concentrate on their business growth. With their expertise and years of experience, they streamline the hiring process and save time and resources for the company.
[Note: This is a partial preview. To download this presentation, visit:
https://www.oeconsulting.com.sg/training-presentations]
Sustainability has become an increasingly critical topic as the world recognizes the need to protect our planet and its resources for future generations. Sustainability means meeting our current needs without compromising the ability of future generations to meet theirs. It involves long-term planning and consideration of the consequences of our actions. The goal is to create strategies that ensure the long-term viability of People, Planet, and Profit.
Leading companies such as Nike, Toyota, and Siemens are prioritizing sustainable innovation in their business models, setting an example for others to follow. In this Sustainability training presentation, you will learn key concepts, principles, and practices of sustainability applicable across industries. This training aims to create awareness and educate employees, senior executives, consultants, and other key stakeholders, including investors, policymakers, and supply chain partners, on the importance and implementation of sustainability.
LEARNING OBJECTIVES
1. Develop a comprehensive understanding of the fundamental principles and concepts that form the foundation of sustainability within corporate environments.
2. Explore the sustainability implementation model, focusing on effective measures and reporting strategies to track and communicate sustainability efforts.
3. Identify and define best practices and critical success factors essential for achieving sustainability goals within organizations.
CONTENTS
1. Introduction and Key Concepts of Sustainability
2. Principles and Practices of Sustainability
3. Measures and Reporting in Sustainability
4. Sustainability Implementation & Best Practices
Putting the SPARK into Virtual Training.pptxCynthia Clay
This 60-minute webinar, sponsored by Adobe, was delivered for the Training Mag Network. It explored the five elements of SPARK: Storytelling, Purpose, Action, Relationships, and Kudos. Knowing how to tell a well-structured story is key to building long-term memory. Stating a clear purpose that doesn't take away from the discovery learning process is critical. Ensuring that people move from theory to practical application is imperative. Creating strong social learning is the key to commitment and engagement. Validating and affirming participants' comments is the way to create a positive learning environment.
Enterprise Excellence is Inclusive Excellence.pdfKaiNexus
Enterprise excellence and inclusive excellence are closely linked, and real-world challenges have shown that both are essential to the success of any organization. To achieve enterprise excellence, organizations must focus on improving their operations and processes while creating an inclusive environment that engages everyone. In this interactive session, the facilitator will highlight commonly established business practices and how they limit our ability to engage everyone every day. More importantly, though, participants will likely gain increased awareness of what we can do differently to maximize enterprise excellence through deliberate inclusion.
What is Enterprise Excellence?
Enterprise Excellence is a holistic approach that's aimed at achieving world-class performance across all aspects of the organization.
What might I learn?
A way to engage all in creating Inclusive Excellence. Lessons from the US military and their parallels to the story of Harry Potter. How belt systems and CI teams can destroy inclusive practices. How leadership language invites people to the party. There are three things leaders can do to engage everyone every day: maximizing psychological safety to create environments where folks learn, contribute, and challenge the status quo.
Who might benefit? Anyone and everyone leading folks from the shop floor to top floor.
Dr. William Harvey is a seasoned Operations Leader with extensive experience in chemical processing, manufacturing, and operations management. At Michelman, he currently oversees multiple sites, leading teams in strategic planning and coaching/practicing continuous improvement. William is set to start his eighth year of teaching at the University of Cincinnati where he teaches marketing, finance, and management. William holds various certifications in change management, quality, leadership, operational excellence, team building, and DiSC, among others.
Business Valuation Principles for EntrepreneursBen Wann
This insightful presentation is designed to equip entrepreneurs with the essential knowledge and tools needed to accurately value their businesses. Understanding business valuation is crucial for making informed decisions, whether you're seeking investment, planning to sell, or simply want to gauge your company's worth.
Attending a job Interview for B1 and B2 Englsih learnersErika906060
It is a sample of an interview for a business English class for pre-intermediate and intermediate English students, with emphasis on the speaking ability.
Affordable Stationery Printing Services in Jaipur | Navpack n PrintNavpack & Print
Looking for professional printing services in Jaipur? Navpack n Print offers high-quality and affordable stationery printing for all your business needs. Stand out with custom stationery designs and fast turnaround times. Contact us today for a quote!
RMD24 | Retail media: how do you deploy this if you're not AH or Unilever? Heid...BBPMedia1
Large players have already been working with retail media for some time. Meanwhile, opportunities in this domain are also becoming visible for other players in the market. But those opportunities also raise questions: become a retail medium yourself, or advertise on one? In which phase of the funnel does it fit, and how do you integrate it into a media plan? What exactly is the difference with marketplaces and programmatic ads? In this half hour we settle the dilemmas, and you will get answers on when it is time for you to take the next step.
Falcon stands out as a top-tier P2P Invoice Discounting platform in India, bridging esteemed blue-chip companies and eager investors. Our goal is to transform the investment landscape in India by establishing a comprehensive destination for borrowers and investors with diverse profiles and needs, all while minimizing risk. What sets Falcon apart is the elimination of intermediaries such as commercial banks and depository institutions, allowing investors to enjoy higher yields.
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
IEEE TRANSACTIONS ON COMPONENTS, PACKAGING AND MANUFACTURING TECHNOLOGY
2) the calculation of a corresponding set of reduced-order
models with common order using a common compact
projection matrix, following the technique [20];
3) the computation of the PC expansion of the reduced
models;
4) the calculation of a PC-based augmented system.
This new proposed technique is able to overcome the
previously mentioned limitations by first calculating a set of
reduced-order models with common order using a common
compact projection matrix and then computing the PC
expansion of the reduced models.
This paper is structured as follows. First, an overview of PC
theory is given in Section II. The proposed stochastic MOR
technique is described in Section III, whereas its validation is
performed in Section IV by means of two pertinent numerical
examples. Conclusions are summed up in Section V.
II. PC PROPERTIES
A stochastic process H with finite variance can be expressed
by means of the PC expansion as [8]
H = Σ_{i=0}^{∞} α_i ϕ_i(ξ)   (1)
where the terms αi are called PC coefficients and ϕi(ξ)
are the corresponding orthogonal polynomials that depend
on the normalized random variables in the vector ξ. The
polynomials ϕi(ξ), also called basis functions, satisfy the
following orthogonality condition [6]:
⟨ϕ_i(ξ), ϕ_j(ξ)⟩ = ∫_Ω ϕ_i(ξ) ϕ_j(ξ) W(ξ) dξ = a_i δ_ij   (2)
where a_i are positive numbers, δ_ij is the Kronecker delta, and W(ξ) is a probability measure with support Ω.
Obviously, (1) must be truncated up to a finite number
of polynomials M + 1 to be used for practical applications.
In particular, if the random variables ξ are independent, it is
possible to prove that
M + 1 = (N + P)! / (N! P!)   (3)
where N is the number of random variables ξi in the vector
ξ and P is the maximum order of the polynomials used in
the truncated PC expansion. Indeed, for independent random
variables, the probability measure W(ξ) can be written as
W(ξ) = Π_{i=1}^{N} W_i(ξ_i)   (4)
since the global probability density function (PDF) is the product of the PDFs of the single random variables. Consequently,
the corresponding orthogonal polynomials can be expressed as a product combination of the basis functions corresponding to each random variable ξ_i as
ϕ_j(ξ) = Π_{k=1}^{N} φ_{i_k}(ξ_k)   with   Σ_{k=1}^{N} i_k ≤ P and 0 ≤ j ≤ M.   (5)
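The basis count in (3) and the multi-index construction in (5) can be checked directly; the sketch below is a generic illustration, not code from the paper.

```python
from math import comb
from itertools import product

def pc_basis_size(N, P):
    """Number of basis terms M + 1 = (N + P)! / (N! P!), as in (3)."""
    return comb(N + P, P)

def pc_multi_indices(N, P):
    """Multi-indices (i_1, ..., i_N) with i_1 + ... + i_N <= P, as in (5)."""
    return [idx for idx in product(range(P + 1), repeat=N) if sum(idx) <= P]

N, P = 3, 2                      # three random variables, second-order expansion
print(pc_basis_size(N, P))       # 10 basis polynomials
print(len(pc_multi_indices(N, P)) == pc_basis_size(N, P))
```

The enumeration makes the factorial growth of (3) with N and P concrete, which is the "curse of dimensionality" that motivates combining PC with model order reduction.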
Furthermore, if the random variables ξ have specific PDFs (i.e., Gaussian, uniform, and beta distributions), the corresponding basis functions are the polynomials of the Wiener–Askey scheme [4]. Indeed, the choice of these polynomials guarantees an exponential convergence rate of the PC expansion, since the associated probability measure W(ξ) corresponds to the PDF of the random variable ξ when placed in standard form [4], [6]. Finally, [6] presents a numerical method to calculate basis functions that guarantee an exponential convergence rate for independent random variables with arbitrary distributions.
In the more general case of correlated random variables
with arbitrary PDFs, the basis functions can be calculated
following the approach described in [5]–[7]. In this case, the
PC expansion convergence rate may not be exponential, since a
variable transformation, such as the Nataf transformation [21]
or the Karhunen–Loève expansion [22], is needed to obtain
the decorrelation.
Once the M + 1 basis functions ϕi(ξ) are calculated, (1)
can be truncated as
H ≈ Σ_{i=0}^{M} α_i ϕ_i(ξ)   (6)
where the only unknown terms are the PC coefficients αi that
can be calculated following one of the two main methods
described in the literature: the spectral projection and the linear
regression technique [7].
The most interesting characteristic of the PC expansion is
the analytical representation of the system variability: complex
stochastic functions of H, such as the PDF, can be efficiently
calculated following standard analytical formulas or numerical
schemes [23], whereas the mean μ and the variance σ2 of H
can be expressed as [7]
μ = α_0  (7)
σ² = ∑_{i=1}^{M} α_i² ⟨ϕ_i(ξ), ϕ_i(ξ)⟩.  (8)
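The moment formulas (7) and (8) can be verified numerically for a toy scalar process; the following Python sketch (our own example, using non-normalized Legendre polynomials for a uniform random variable on [−1, 1]) cross-checks them against a Monte Carlo estimate:

```python
import numpy as np

def legendre_norms(M):
    # Norms <phi_i, phi_i> of the non-normalized Legendre polynomials
    # for a uniform random variable xi on [-1, 1]: 1 / (2 i + 1)
    return np.array([1.0 / (2 * i + 1) for i in range(M + 1)])

def pc_mean_variance(alpha, norms):
    # Eqs. (7)-(8): mu = alpha_0, sigma^2 = sum_{i>=1} alpha_i^2 <phi_i, phi_i>
    return alpha[0], float(np.sum(alpha[1:] ** 2 * norms[1:]))

alpha = np.array([2.0, 3.0, 0.0])       # toy process H(xi) = 2 + 3 xi
mu, var = pc_mean_variance(alpha, legendre_norms(2))

# Monte Carlo cross-check of the analytical moments
xi = np.random.default_rng(0).uniform(-1.0, 1.0, 200_000)
h = 2.0 + 3.0 * xi
assert abs(mu - h.mean()) < 2e-2 and abs(var - h.var()) < 2e-2
```

The key practical point, as stated above, is that the moments come directly from the PC coefficients, with no sampling of the original system.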
Finally, if the stochastic process is written in a matrix form H,
the corresponding PC expansion is
H ≈ ∑_{i=0}^{M} α_i ϕ_i(ξ)  (9)
where the terms αi are matrices of PC coefficients, corre-
sponding to the i-th polynomial basis, calculated for each entry
of H. For an extensive reference to PC theory, the reader may
consult [4]–[8].
III. STOCHASTIC MOR
The proposed technique aims to perform the VA of a
generic multiport system represented by a descriptor state-
space form as
(sC(ξ) + G(ξ)) X(s, ξ) = B(ξ)  (10)
H(s, ξ) = L^T(ξ) X(s, ξ)  (11)
where the descriptor state-space matrices C, G, B, L, which
depend on a vector of random variables ξ, are large matrices
calculated by an EM method, such as the partial element equivalent circuit (PEEC) technique [2].
This article has been accepted for inclusion in a future issue of IEEE TRANSACTIONS ON COMPONENTS, PACKAGING AND MANUFACTURING TECHNOLOGY (SPINA et al.: EFFICIENT VA OF ELECTROMAGNETIC SYSTEMS). Content is final as presented, with the exception of pagination.
The superscript T
represents the matrix transpose. The dimensionality of the descriptor state-space matrices in (10) and (11) is C ∈ R^{Z×Z}, G ∈ R^{Z×Z}, B ∈ R^{Z×N_p}, L ∈ R^{Z×N_p}, where Z is the number of state-vector unknowns, which depends on the particular EM method used to compute (10) and (11), and N_p represents the number of ports of the system. In some recent contributions [24]–[26], it is proven that it is possible to efficiently calculate the PC expansion of the system starting from the PC expansion of the corresponding model (state-space models in [24] and transmission line models in [25] and [26]). Theoretically, a similar approach could be used for systems described by (10) and (11). Indeed, using the PC expansion (9) to express the state-space matrices, the state vector, and the output in (10) and (11) yields
s ∑_{i=0}^{M} ∑_{j=0}^{M} C_i X_j(s) ϕ_i(ξ)ϕ_j(ξ) = − ∑_{i=0}^{M} ∑_{j=0}^{M} G_i X_j(s) ϕ_i(ξ)ϕ_j(ξ) + ∑_{i=0}^{M} B_i ϕ_i(ξ)  (12)
∑_{j=0}^{M} H_j(s) ϕ_j(ξ) = ∑_{i=0}^{M} ∑_{j=0}^{M} L_i^T X_j(s) ϕ_i(ξ)ϕ_j(ξ).  (13)
Let us assume for the moment that the PC expansion of
the state-space matrices in (12) and (13) is already cal-
culated. Therefore, the only unknowns are the PC coeffi-
cient matrices of the state vector X j and of the transfer
function H j .
The desired PC coefficient matrices can be calculated by projecting (12) and (13) on each basis function ϕ_p(ξ) for p = 0, …, M (this procedure is referred to as Galerkin projection [4], [25] in the PC theory). Indeed, projecting (12) on the basis function ϕ_p(ξ) yields
s ∑_{i=0}^{M} ∑_{j=0}^{M} C_i X_j(s) ⟨ϕ_i ϕ_j, ϕ_p⟩ = − ∑_{i=0}^{M} ∑_{j=0}^{M} G_i X_j(s) ⟨ϕ_i ϕ_j, ϕ_p⟩ + ∑_{i=0}^{M} B_i ⟨ϕ_i, ϕ_p⟩  (14)
where the explicit dependence on the vector ξ is omitted, for
the sake of clarity. Repeating this operation for p = 0, . . . , M
gives a frequency-dependent linear system of the form
X X_α = B_α  (15)
where X ∈ R^{(M+1)Z×(M+1)Z}, X_α ∈ R^{(M+1)Z×N_p}, and B_α ∈ R^{(M+1)Z×N_p}.
To describe how it is possible to obtain (15), let us assume
for simplicity that one random variable and two basis functions
are used for the PC expansion. In this simplified case, (14) can
be rewritten as
(sC_0 + G_0) X_0 ϕ_0ϕ_0 + (sC_1 + G_1) X_0 ϕ_1ϕ_0 + (sC_0 + G_0) X_1 ϕ_0ϕ_1 + (sC_1 + G_1) X_1 ϕ_1ϕ_1 = B_0 ϕ_0 + B_1 ϕ_1.  (16)
Now, because of the orthogonality relation (2), the projection
of (16) onto the basis function ϕ0 gives
E_0(s)X_0 ⟨ϕ_0ϕ_0, ϕ_0⟩ + E_1(s)X_0 ⟨ϕ_1ϕ_0, ϕ_0⟩ + E_0(s)X_1 ⟨ϕ_0ϕ_1, ϕ_0⟩ + E_1(s)X_1 ⟨ϕ_1ϕ_1, ϕ_0⟩ = B_0 ⟨ϕ_0, ϕ_0⟩  (17)
where Ei(s) = (sCi + Gi) for i = 0, 1. The projection of
(16) onto the basis function ϕ1 yields
E_0(s)X_0 ⟨ϕ_0ϕ_0, ϕ_1⟩ + E_1(s)X_0 ⟨ϕ_1ϕ_0, ϕ_1⟩ + E_0(s)X_1 ⟨ϕ_0ϕ_1, ϕ_1⟩ + E_1(s)X_1 ⟨ϕ_1ϕ_1, ϕ_1⟩ = B_1 ⟨ϕ_1, ϕ_1⟩.  (18)
Next, (17) and (18) can be rewritten in the form (15) as
[X_00  X_01; X_10  X_11] [X_0; X_1] = [B_0; B_1]  (19)
where
X_00 = E_0(s) ⟨ϕ_0ϕ_0, ϕ_0⟩/⟨ϕ_0, ϕ_0⟩ + E_1(s) ⟨ϕ_1ϕ_0, ϕ_0⟩/⟨ϕ_0, ϕ_0⟩
X_01 = E_0(s) ⟨ϕ_0ϕ_1, ϕ_0⟩/⟨ϕ_0, ϕ_0⟩ + E_1(s) ⟨ϕ_1ϕ_1, ϕ_0⟩/⟨ϕ_0, ϕ_0⟩
X_10 = E_0(s) ⟨ϕ_0ϕ_0, ϕ_1⟩/⟨ϕ_1, ϕ_1⟩ + E_1(s) ⟨ϕ_1ϕ_0, ϕ_1⟩/⟨ϕ_1, ϕ_1⟩
X_11 = E_0(s) ⟨ϕ_0ϕ_1, ϕ_1⟩/⟨ϕ_1, ϕ_1⟩ + E_1(s) ⟨ϕ_1ϕ_1, ϕ_1⟩/⟨ϕ_1, ϕ_1⟩.  (20)
The frequency-dependent system (19) can now be solved for
each frequency of interest, upon calculation of the scalar
products in (20). Finally, the PC coefficients of the output
H j (s) can be computed using the PC coefficients of state
vector X j (s). Indeed, the projection of (13) onto the basis
functions ϕp(ξ), p = 0, . . . , M, leads to
H_p(s) = ∑_{i=0}^{M} ∑_{j=0}^{M} L_i^T X_j(s) ⟨ϕ_i(ξ)ϕ_j(ξ), ϕ_p(ξ)⟩ / ⟨ϕ_p(ξ), ϕ_p(ξ)⟩  (21)
where all the scalar products were already calculated in order
to build the matrix X .
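The two-basis example (16)–(20) can be reproduced numerically for a toy scalar system; in the following Python sketch all numerical values are our own, and the exact parametric solution is used to check that X_0 approximates the mean of X(ξ):

```python
import numpy as np

# Toy one-state system (s C(xi) + G(xi)) X = B with one uniform random
# variable on [-1, 1] and two Legendre basis functions phi_0 = 1,
# phi_1 = xi; at a fixed real frequency point E_i = s C_i + G_i is a
# scalar. All numerical values are our own.
E0, E1 = 2.0, 0.5                  # PC coefficients of E(xi) = E0 + E1 xi
B0, B1 = 1.0, 0.0                  # PC coefficients of the right-hand side

# Scalar products for the uniform PDF (weight 1/2 on [-1, 1]):
# <phi_0, phi_0> = 1, <phi_1, phi_1> = 1/3, <phi_1 phi_1, phi_0> = 1/3,
# and every product containing an odd power of xi vanishes.
blocks = np.array([[E0, E1 / 3.0],     # X00, X01 of eq. (20)
                   [E1, E0      ]])    # X10, X11 of eq. (20)
X0, X1 = np.linalg.solve(blocks, np.array([B0, B1]))

# Cross-check: X0 approximates the mean of the exact parametric solution
# X(xi) = 1 / (2 + 0.5 xi), whose expectation is ln(5/3) = 0.5108...
assert abs(X0 - np.log(5.0 / 3.0)) < 5e-3
```

Even with only two basis functions, the Galerkin solution matches the exact mean to three decimal places, which illustrates the fast convergence discussed in Section II.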
However, the approach described above cannot be efficiently
used for systems described by (10) and (11). The calculation of
the PC expansion (9) for the large matrices C, G, B, L would
be very expensive both in terms of memory and computational
time, since each corresponding PC coefficient matrix would have the same dimension as the original matrix. Furthermore,
the PC expansion of the matrices C, G, B, L would lead
to an augmented system in the form (15) of such a high
dimension that the computational cost required to solve it may
compromise the efficiency of the PC expansion with respect
to the MC analysis.
The previously developed techniques [11]–[13] partially
solve these issues for systems described by Helmholtz equa-
tions. Indeed, in [11]–[13], it is proposed to evaluate first the
original large system of equations over a discrete set of points
in the design space and then employ a deterministic MOR
technique for each system of equations to generate the corresponding projection matrix. Then, a PC model of the original large system matrices and of the projection
matrix. It is important to notice that, following the approaches
described in [11]–[13], all the projection matrices calculated
for the initial discrete set of points must have common
dimensions, otherwise it is not possible to compute the cor-
responding PC model of the projection matrix via numerical
integration [13, eq. (33)]. Indeed, it is not possible to calculate
a summation of matrices with different dimensions. Finally, the PC coefficients of the reduced system are computed, which leads to an augmented system in a form similar to (15), but with a drastically reduced overall dimension. A compact PC expansion of the original system can now be calculated by employing standard deterministic techniques to solve the obtained augmented system. This approach is accurate and efficient with respect to the MC analysis (which has an extremely high computational cost, since it requires a huge number of simulations of the original large-scale model). Furthermore, the techniques [11]–[13] offer the possibility to use different MOR techniques to calculate the corresponding reduced-order PC models. However, the techniques [11]–[13] can be expensive both in terms of memory and computational time, since they require:
1) to calculate a PC model of the original large scale
equations;
2) to calculate a PC model of the projection matrices.
The novel method described in this paper is able to over-
come these limitations by first calculating a set of reduced-
order models with common order using a common compact
projection matrix, following the technique [20], and then computing the PC expansion of the reduced system. In particular, the method described in [20] is implemented using a worst-case choice for the estimation of the reduced model order and a global approach to build the common compact projection matrix. In the following, we describe the proposed novel method in detail.
Note that fixing the value of the random variables ξ = ξ̄ in (10) and (11) yields
(sC(ξ̄) + G(ξ̄)) X(s, ξ̄) = B(ξ̄)  (22)
H(s, ξ̄) = L^T(ξ̄) X(s, ξ̄).  (23)
Now, it is possible to calculate an equivalent reduced-order model as
(sĈ(ξ̄) + Ĝ(ξ̄)) X̂(s, ξ̄) = B̂(ξ̄)  (24)
Ĥ(s, ξ̄) = L̂^T(ξ̄) X̂(s, ξ̄)  (25)
where the reduced-order matrices, indicated with the superscript ∧, can be calculated by means of a suitable projection matrix F as
Ĉ(ξ̄) = F^T C(ξ̄) F  (26)
Ĝ(ξ̄) = F^T G(ξ̄) F  (27)
B̂(ξ̄) = F^T B(ξ̄)  (28)
L̂(ξ̄) = F^T L(ξ̄).  (29)
The projection matrix F can be calculated using a MOR
technique, such as the Krylov-based Laguerre-singular value
decomposition (SVD) [27] or passive reduced-order inter-
connect macromodeling algorithm (PRIMA) [28] algorithms.
Therefore, it is possible to calculate for each combination of
values of the random variables ξ in the stochastic space
the corresponding reduced system in a descriptor state-space
form. Let us suppose that we have calculated K sets of reduced matrices [Ĉ_k, Ĝ_k, B̂_k, L̂_k]_{k=1}^{K} with common dimension for the corresponding values of the random variables [ξ̄_k]_{k=1}^{K} (initial sampling) in the stochastic space. To evaluate the common order for all the K reduced models that will lead to accurate results, first we calculate a reduced-order model only for the set of U corner points [ξ̄_u]_{u=1}^{U}, where U ⊂ K, aiming at minimizing the error with respect to the system frequency response H(s, ξ̄_u) calculated with the original large-dimension system matrices C(ξ̄_u), G(ξ̄_u), B(ξ̄_u), L(ξ̄_u), for u = 1, …, U. Next, we compute the corresponding set of K projection matrices F_k of common order for k = 1, …, K. Finally, all the projection matrices calculated so far are stacked in a projection matrix as
F_Union = [F_1, F_2, …, F_K].  (30)
The accuracy of the K reduced models with common order,
which is estimated using the U corner points, can be verified
by comparing the corresponding frequency responses with
respect to the system frequency responses calculated using the
K original large-dimension system matrices. If, for a particular
example, the desired accuracy cannot be achieved using only
the U corner points for the order evaluation, it is always
possible to estimate the common order using all the K initial
samples. However, the choice of using corners to estimate the
common order has proven to be accurate in many cases and saves computational time [20].
Now, it is necessary to compute a common projection matrix
to be able to calculate a parametric reduced-order model over
the design points [ξk]K
k=1. This goal is achieved in two steps.
First, the SVD of the projection matrix F_Union is calculated as
U Σ V^T = svd(F_Union).  (31)
Second, to guarantee the compactness of the common projection matrix, a common reduced order r can be defined based on the first r significant singular values, which can be identified by setting a threshold on the ratio of each singular value to the largest singular value. Indeed, a common projection matrix Q_C can now be expressed as
Q_C = U_r  (32)
where U_r is the matrix U resulting from the SVD decomposition (31), restricted to the columns associated with the first r significant singular values. Hence,
the desired reduced-order matrices with common order can be
expressed as
C̃_k(ξ̄_k) = Q_C^T C(ξ̄_k) Q_C  (33)
G̃_k(ξ̄_k) = Q_C^T G(ξ̄_k) Q_C  (34)
B̃_k(ξ̄_k) = Q_C^T B(ξ̄_k)  (35)
L̃_k(ξ̄_k) = Q_C^T L(ξ̄_k)  (36)
for k = 1, . . . , K, where the superscript ∼ represents the
reduced matrices with common order.
Finally, the PC model for the matrices ˜C, ˜G, ˜B, ˜L can
be computed. First of all, it is necessary to calculate the basis functions [ϕ_i]_{i=0}^{M} following the approaches indicated in Section II. Without loss of generality, let us suppose that the random variables ξ are independent. Hence, the number of basis functions M + 1 can be chosen upfront according to (3), considering that P can be limited between two and five [4], [24], [26] for practical applications. Finally, the PC coefficient matrices of C̃, G̃, B̃, L̃ of (33)−(36) can be calculated by means of the linear regression approach (Section II). The linear regression approach calculates the desired PC coefficients by solving a suitable over-determined least-squares system [7], which for the reduced state-space matrix C̃ can be written as
Φ α = R  (37)
where the regression matrix Φ ∈ R^{K Z̃×(M+1) Z̃}, α ∈ R^{(M+1) Z̃×Z̃}, R ∈ R^{K Z̃×Z̃}, and Z̃ represents the order of the matrix C̃. In particular, the k-th block row of the matrix Φ contains the multivariate polynomial basis functions ϕ_i for i = 0, …, M evaluated in ξ̄_k for k = 1, …, K, multiplied by the identity matrix of the same dimension as the matrix C̃. The corresponding set of values of the matrix C̃_k(ξ̄_k) for k = 1, …, K is stored in the matrix R. Finally, the desired PC coefficients C̃_i for i = 0, …, M are collected in the matrix α. Equation (37) for each of the descriptor state-space matrices can be solved in a least-squares sense using an elementwise, columnwise, or matrixwise approach.
Since the linear regression approach requires solving an over-determined linear system in the form (37), the number of initial samples K is chosen according to the following relation [7]:
K ≈ 2 (M + 1).  (38)
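The entrywise linear regression of (37)–(38) can be illustrated with a synthetic reduced matrix whose PC coefficients are known in advance; the sketch below (our own construction) recovers them exactly:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(3)
Zt, M = 4, 2                       # reduced order Z~ and PC order (M + 1 = 3)
K = 2 * (M + 1)                    # number of initial samples, eq. (38)
xi_k = np.linspace(-1.0, 1.0, K)   # samples of the uniform random variable

def phi(i, x):                     # Legendre basis function phi_i
    return legendre.legval(x, [0.0] * i + [1.0])

# Synthetic reduced matrices whose PC coefficients A_i are known exactly
A = [rng.standard_normal((Zt, Zt)) for _ in range(M + 1)]
C_samples = [sum(A[i] * phi(i, x) for i in range(M + 1)) for x in xi_k]

# Regression matrix: row k holds phi_0(xi_k), ..., phi_M(xi_k); the
# least-squares system (37) is solved with one unknown column per entry
Phi = np.array([[phi(i, x) for i in range(M + 1)] for x in xi_k])
R = np.stack([Ck.ravel() for Ck in C_samples])          # K x Zt^2
alpha = np.linalg.lstsq(Phi, R, rcond=None)[0]          # (M+1) x Zt^2

# The regression recovers the ground-truth PC coefficient matrices
for i in range(M + 1):
    assert np.allclose(alpha[i].reshape(Zt, Zt), A[i], atol=1e-10)
```

Because the regression acts on the reduced matrices of order Z̃, its cost is negligible compared with the reduction step itself.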
At this point, we have calculated a PC model of the reduced
descriptor state-space matrices in the form (9). For example,
the matrix C̃(ξ) can be written as
C̃(ξ) ≈ ∑_{i=0}^{M} C̃_i ϕ_i(ξ).  (39)
Finally, it is possible to write two equations in the form
(10) and (11) for the reduced-order descriptor state-space
matrices. Applying the same procedure discussed above, it
is possible to compute a frequency-dependent linear system
in the form (15), but the overall dimension of this system is
much smaller than the corresponding one related to the use of
the original large scale matrices. Therefore, the PC expansion
of the system transfer function can be calculated following the
same procedure described above, see equations (16)–(21).
The proposed technique is flexible: the transfer function H of a generic multiport system can be expressed by different representations (e.g., scattering or admittance parameters), different MOR techniques can be used to calculate the reduced systems, and the computational complexity is reduced with respect to the previous approaches [11]–[13]. The novel proposed technique makes it possible to perform
the VA of large-dimension systems, such as the ones resulting
from EM simulators, with accuracy and efficiency, because of the expression of the system transfer function as a suitable combination of PC expansion and MOR methods.
Fig. 1. Flowchart of the proposed modeling strategy.
Finally, the proposed method does not require computing a PC model of the projection operator. The techniques [11]–[13] implicitly assume that the projection operator, as a function of the parameters ξ chosen for the VA, can be accurately modeled by a PC expansion. However, in our experience, the calculation of a PC model of the projection operator can be prone to inaccuracies when the system under study is quite sensitive to the parameters chosen for the VA or when the range of variation of these parameters is large enough. The projection operator is computed independently for each initial sample in the stochastic space, and it may not be smooth enough as a function of the stochastic parameters ξ to be accurately modeled by a PC expansion. Section IV illustrates this aspect.
However, the proposed technique has a limitation: it can still be applied if the number of uncertain parameters is relatively large (e.g., N = 10), but at the cost of a loss of efficiency. Indeed, the number K of initial samples is chosen according to (38), and the number of basis functions M + 1 increases rapidly with the number of uncertain parameters N, according to (3). Therefore, the calculation of the K initial samples of the large-scale system can be expensive, and the computational costs of (31) and of solving the reduced system in the form (15) can increase as well. This limitation originates from the formulation of the PC expansion: the number M + 1 of basis functions of any PC model in the form (9) increases rapidly with the number of uncertain parameters N, according to (3). Therefore, the corresponding number of M + 1 unknown PC coefficients that must be estimated is large.
The flowchart of the proposed approach is shown in Fig. 1.
Fig. 2. Example A. Cross section of the coupled microstrips.
IV. NUMERICAL EXAMPLES
In this section, we show the results of the VA performed with the proposed technique for two different structures.
In each example, the random variables in the vector ξ
are assumed independent and with uniform PDF. The cor-
responding basis functions are products of the Legendre
polynomials [5]. The scalar products resulting from the
use of the Galerkin projections are calculated analytically
beforehand.
The validation of the accuracy and efficiency of the pro-
posed technique is performed by means of a comparison
with the results of other VA techniques. The simulations are
performed using MATLAB1 R2012a on a Windows platform
equipped with an Intel Core2 Extreme CPU Q9300 2.53 GHz
and 8 GB RAM.
A. Transmission Line
In this first example, the scattering parameters of two
coupled uniform microstrip lines are considered as a stochastic
process with respect to the length of the line in the frequency
range 100 kHz–3 GHz. The cross section of the microstrip
lines is shown in Fig. 2.
The frequency-independent per-unit-length parameters of the lines are [29]
R_pul = [0.2  0; 0  0.2] Ω/m  (40)
L_pul = [0.28  0.07; 0.07  0.28] nH/m  (41)
G_pul = [0  0; 0  0] S/m  (42)
C_pul = [0.122  −0.05; −0.05  0.122] pF/m.  (43)
Starting from the per-unit-length parameters, the corresponding matrices for the admittance representation are computed using the segmentation method described in [27] by dividing the lines into 1650 sections of equal length, which gives state-space matrices of order 6602. Then, these matrices are converted into the corresponding ones for the scattering representation as in [30]. The corresponding descriptor state-space representation is
(sC + G) X(s) = B  (44)
H(s) = L^T X(s) + D  (45)
where the matrix D is the identity matrix of dimension N_p × N_p.
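The segmentation idea can be illustrated with a minimal sketch for a single R-C line (the coupled R-L-G-C case of (40)–(43) follows the same pattern with block matrices); all values and function names are our own:

```python
import numpy as np

def rc_ladder_dss(r_pul, c_pul, length, n_sec):
    # Divide the line into n_sec equal R-C sections; the input is an
    # ideal voltage source at the near end, the states are the n_sec
    # internal node voltages, the output is the far-end node voltage:
    # (s C + G) X(s) = B U(s), H(s) = L^T X(s)
    dx = length / n_sec
    r, c = r_pul * dx, c_pul * dx          # per-section resistance/capacitance
    n = n_sec
    G = np.zeros((n, n))
    G[0, 0] += 1.0 / r                     # resistor from source to node 0
    for k in range(n - 1):                 # resistors between adjacent nodes
        G[k, k] += 1.0 / r
        G[k + 1, k + 1] += 1.0 / r
        G[k, k + 1] -= 1.0 / r
        G[k + 1, k] -= 1.0 / r
    C = np.diag(np.full(n, c))             # shunt capacitors at each node
    B = np.zeros((n, 1)); B[0, 0] = 1.0 / r   # source drives node 0 through r
    L = np.zeros((n, 1)); L[-1, 0] = 1.0      # observe the far-end voltage
    return C, G, B, L

C, G, B, L = rc_ladder_dss(r_pul=0.2, c_pul=0.122e-12, length=0.1, n_sec=50)
# DC sanity check: capacitors are open, no current flows, so the far-end
# voltage equals the source voltage and H(0) = 1
H0 = (L.T @ np.linalg.solve(0.0 * C + G, B)).item()
assert abs(H0 - 1.0) < 1e-9
```

The order of the resulting matrices grows linearly with the number of sections, which is why the order 6602 model above is a meaningful stress test for the SVD step (31).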
1The Mathworks, Inc., Natick.
TABLE I
EXAMPLE A. COMPUTATIONAL TIME OF THE PROPOSED PC-BASED
TECHNIQUE FOR L ∈ 9.75–10.25 CM
It is important to notice that the scattering parameters of the two coupled microstrips can be efficiently computed using the exact transmission line theory; therefore, the VA could be performed without the calculation of a large descriptor state-space model. However, the calculation of a state-space representation of order 6602 makes it possible to verify the performance of the novel proposed method for the case in which the SVD decomposition (31) is computed for a very high-dimension matrix. Finally, the VA of the system in Fig. 2 is performed with respect to the length of the lines in two different ranges of variation and for different orders of the PC models. The performance of the proposed method is compared with the results of the technique [13], for the same number of initial length samples. Note that the reduced models in [13, eq. (16)] are calculated by means of the Galerkin projections instead of truncating the corresponding expansion, since the order of the PC models used in this example is greater than one.
First, the length of the lines varies within the range
9.75–10.25 cm with a nominal value L0 = 10 cm as a
uniform random variable. The corresponding PC expansion
is calculated using P = 2 and M = 2, according to (3).
Therefore, the descriptor state-space form (44) and (45) is
computed over a regular grid of K = 6 samples. The MOR
technique PRIMA [28] is used to calculate the reduced-order
model, and a maximum absolute model error of −50 dB over
80 frequency samples is targeted to estimate the common
order of the reduced models. To build the common projection matrix, 0.0001 is chosen as the threshold to identify the first r significant singular values of the SVD decomposition in (31), leading to a common projection matrix Q_C ∈ R^{6602×64}.
Finally, K reduced models in a descriptor state-space form
of order 64 are computed and modeled using PC expansions
as previously discussed.
Table I shows the total computational time of the proposed
technique detailing the cost of the different operations.
TABLE II
EXAMPLE A. COMPUTATIONAL TIME OF THE TECHNIQUE [13]
FOR L ∈ 9.75–10.25 CM
In particular, the element “Initial Data” in Table I represents
the operation of calculating the descriptor state-space matrices
for the chosen length samples and the corresponding scattering
parameters for the two corner points. It is important to specify
that the latter operation takes 1 h 11 min 9.7 s, whereas the calculation of the descriptor state-space matrices for all six length samples requires only 29 min 24.8 s. Hence, a
MC analysis performed calculating the scattering parameters
using the corresponding descriptor state-space models would
require approximately 281 d, 3 h 10 min 33.3 s, considering
10 000 length samples. Therefore, to validate the results of
the VA for this example, the MC analysis is performed using
the exact transmission line theory. The element Projection
Operator in Table I indicates the calculation of the projection
operator with common order for all the initial length samples,
whereas the element “Reduced DSS Matrices” represents the
calculation of the reduced matrices in the form (33)–(36),
where the symbol DSS stands for descriptor state-space.
Note that the latter operation requires only a few seconds, even if the initial descriptor state-space model has order 6602.
Finally, the elements in the last three rows of Table I indicate
the calculation of the PC model of the reduced descriptor
state-space matrices using the linear regression method, the
computation of the PC model of the scattering parameters
solving an augmented linear system for the reduced state-
space matrices in the form (15), and the computation of the
mean and the variance of the magnitude of the scattering
parameters using the corresponding PC model, respectively.
Table II shows the computational time for the VA performed with the technique [13], in a form similar to Table I. Again, to estimate the common order of the projection operators calculated for all the initial length samples, −50 dB is assumed as the maximum absolute model error. Note that the element "PC Model Initial DSS Matrices and Proj. Op." in Table II describes the calculation through numerical
integration of the PC model of the initial descriptor state-space matrices and projection operator.
TABLE III
EXAMPLE A. MEMORY REQUIREMENTS OF THE PROPOSED PC-BASED TECHNIQUE AND OF THE TECHNIQUE [13] FOR L ∈ 9.75–10.25 CM
In this case, the novel proposed method is more efficient in
terms of computational time with respect to the technique [13].
Indeed, the calculation of the SVD decomposition of the
projection operator (31) is much more efficient than the
calculation of the PC model of the initial descriptor state-space
matrices and projection operator required by [13], see Tables I
and II, even if just three PC coefficients must be estimated via
numerical integration for [13].
Furthermore, the novel proposed PC method is much more efficient in terms of memory requirements, as shown in Table III. By memory requirements, we indicate the number of megabytes (MB) necessary to load a particular matrix into the computer RAM. In particular, the memory necessary to store the descriptor state-space matrices for all the initial length samples and the corresponding projection operators is the same for both techniques, see the elements "Original DSS Matrices and Projection Operator" in Table III, whereas the computation of the reduced matrices (33)–(36) using the common projection operator (32) is much more efficient than the calculation of the PC models of the descriptor state-space matrices and of the projection operator, see the elements "Common Proj. Op." (32) and "Reduced DSS Matrices" (33)–(36), and "PC Model Initial DSS Matrices" and "Projection Operator" in Table III, respectively.
Finally, the proposed PC-based stochastic MOR technique
shows an excellent accuracy compared with the MC analy-
sis, performed over 10 000 L samples, in computing system
variability features, as shown in Figs. 3 and 4. In particular,
Figs. 3 and 4 show the mean and standard deviation of the
magnitude of the element S32 obtained with the MC analysis,
proposed PC method, and technique [13]. Similar results can
be obtained for the other entries of the scattering matrix.
Next, the line length is considered as a uniform random variable varying in the range 9–11 cm. An example of the
scattering parameters variability with respect to the chosen
random variable is given in Fig. 5. The reduced-order models
are calculated again using the MOR technique PRIMA [28],
targeting an absolute model error of −50 dB over 80 frequency samples. The common projection matrix, obtained by choosing 0.0001 as the threshold to identify the first r significant singular values of the SVD decomposition in (31), has dimension Q_C ∈ R^{6602×72}.
Fig. 3. Example A. Top: comparison between the mean of the magnitude of S32 obtained with the MC analysis (full black line), proposed PC-based method (red dotted line), and technique [13] (green dashed line). Bottom: absolute error of the two PC-based VA techniques with respect to the MC analysis.
Fig. 4. Example A. Top: comparison between the standard deviation of the magnitude of S32 obtained with the MC analysis (full black line), proposed PC-based method (red dotted line), and technique [13] (green dashed line). Bottom: absolute error of the two PC-based VA techniques with respect to the MC analysis.
The proposed PC-based stochastic MOR technique is more
efficient both in terms of memory and computational require-
ments with respect to technique [13], see Tables IV–VI, and
it shows a great accuracy compared with the classical MC
analysis, performed over 10 000 L samples, in computing
system variability features, as shown in Figs. 6–8. In particular,
Figs. 6 and 7 show the mean and standard deviation of the
magnitude of the element S43 obtained with the MC analysis,
proposed PC method, and technique [13]. Similar results can
be obtained for the other entries of the scattering matrix.
It is important to notice that, in this case, the results of the VA performed using the technique [13] are inaccurate with respect to the MC analysis performed using the transmission line theory over 10 000 L samples (Figs. 6 and 7). Indeed, the calculation via numerical integration of the PC model of the projection operator is inaccurate (Fig. 9). It is worth noticing that the calculation of the PC model of the original descriptor state-space matrices and of the projection operator is two times more expensive with respect to the previous case, see Tables II and V, and it requires much more memory (Tables III and VI).
Fig. 5. Example A. Variability of the magnitude of S12 calculated for L ∈ 9–11 cm. Blue lines: results of the MC simulations. Red thick line: nominal value for L0 = 10 cm.
TABLE IV
EXAMPLE A. COMPUTATIONAL TIME OF THE PROPOSED PC-BASED TECHNIQUE FOR L ∈ 9–11 CM
B. Bent Conductors
In this second example, a system of three bent conductors in free space is modeled. Its layout is shown in Fig. 10. The copper conductors, placed at a distance of S0 = 2 mm from one another, have width W0 = 0.5 mm and length
L0 = 5 mm. The copper conductivity is assumed equal to 5.8 · 10^7 S/m.
TABLE V
EXAMPLE A. COMPUTATIONAL TIME OF THE TECHNIQUE [13] FOR L ∈ 9–11 CM
TABLE VI
EXAMPLE A. MEMORY REQUIREMENTS OF THE PROPOSED PC-BASED TECHNIQUE AND OF THE TECHNIQUE [13] FOR L ∈ 9–11 CM
The system admittance parameters are considered as a
stochastic process first with respect to the couple of random
variables (L, S) varying in a range of ±10% with respect to
the central values previously mentioned. The admittance repre-
sentation is evaluated in the frequency range 100 kHz–5 GHz
using the PEEC method [31] over a grid composed of
4 × 4 (L, S) samples for the geometrical parameters. The
corresponding set of descriptor state-space matrices computed
has order 2124 for each initial (L, S) sample. The Laguerre-
SVD MOR technique [27] is used to calculate the reduced-
order models. The evaluation of the common order of the
reduced models is performed assuming 0.0001 as maximum
weighted rms error between the admittance parameters of
the reduced model and the original system over K_s = 50 frequency samples
Error_rms = √( ( ∑_{i=1}^{N_p²} ∑_{k=1}^{K_s} |w_{Y_i}(s_k) (Y_{r,i}(s_k) − Y_i(s_k))|² ) / (N_p² K_s) )  (46)
with
w_{Y_i}(s) = (Y_i(s))^{−1}.  (47)
Fig. 6. Example A. Top: comparison between the mean of the magnitude of S12 obtained with the MC analysis (full black line), proposed PC-based method (red dotted line), and technique [13] (green dashed line). Bottom: absolute error of the two PC-based VA techniques with respect to the MC analysis in logarithmic scale.
Fig. 7. Example A. Top: comparison between the standard deviation of the magnitude of S12 obtained with the MC analysis (full black line), proposed PC-based method (red dotted line), and technique [13] (green dashed line). Bottom: absolute error of the two PC-based VA techniques with respect to the MC analysis in logarithmic scale.
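The weighted rms error (46)–(47) can be implemented in a few lines; the following Python sketch (our own array layout: one row per admittance entry, one column per frequency sample) checks two limiting cases:

```python
import numpy as np

def weighted_rms_error(Y, Yr):
    # Eqs. (46)-(47): Y and Yr hold the Np^2 admittance entries (rows)
    # of the original and reduced models over Ks frequency samples
    # (columns), weighted entrywise by w = 1/Y
    diff = (Yr - Y) / Y
    return float(np.sqrt(np.sum(np.abs(diff) ** 2) / Y.size))

rng = np.random.default_rng(4)
Y = rng.uniform(0.5, 2.0, (9, 50))          # Np^2 = 9 entries, Ks = 50
assert weighted_rms_error(Y, Y) == 0.0
# A uniform 0.1% relative deviation yields exactly that rms value
assert abs(weighted_rms_error(Y, 1.001 * Y) - 1e-3) < 1e-12
```

The weighting by 1/|Y| prevents large admittance entries from dominating the error measure, which matters when the entries span several orders of magnitude.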
The identification of the r significant singular values of the SVD decomposition in (31) is performed assuming 0.0001 as a threshold. Upon calculation of the common projection
matrix (32), each of the K reduced-order models calculated for
the couple of random variables (L, S) has order 232. Finally,
the set of reduced-order matrices is modeled using a second-
order (P = 2) PC expansion, giving M = 5 for (L, S),
according to (3).
Fig. 8. Example A. PDF and CDF of the magnitude of S12 at 835 MHz. Full
black line: PDF computed using the novel technique. Dashed black line: CDF
computed using the novel technique. Circles (◦): PDF computed using the
MC technique. Squares (□): CDF computed using the MC technique.
Fig. 9. Example A. Top: comparison between the value of the 3317th row
and 59th column of the projection operator calculated for all the initial length
samples (full black line) and corresponding value obtained using the PC model
of the projection operator (green dashed line). Bottom: same comparison for
the value of the 4200th row and 39th column of the projection operator.
The results of the VA obtained with the novel proposed
method are compared with the corresponding ones given
by the technique [13] and are validated by means of comparison with the results of the MC analysis using the parameterized model-order reduction technique [31]. The latter is a parameterized MOR method that guarantees the overall stability and passivity of the parameterized reduced-order models, using passivity-preserving interpolation schemes. In particular, first a set of reduced models with the same order is calculated assuming 0.0001 as the threshold for the error measure (46). Then, the reduced models obtained are interpolated over the 10 000 (L, S)
samples used for the MC analysis. Note that the parameter-
ized model-order reduction technique [31] is applied to the
same PEEC matrices calculated for the initial (L, S) samples
used for the novel proposed technique. Finally, the initial
Fig. 10. Example B. Geometry of the system of bended conductors.
Fig. 11. Example B. Top: comparison between the value of the 1130th
row and 31st column of the projection operator calculated for all the length
samples corresponding to the value of spacing S = 2 mm (full black line) and
corresponding value obtained using the PC model of the projection operator
(green dashed line). Bottom: same comparison for the value of the 2000th
row and 11th column of the projection operator.
Fig. 12. Example B. Comparison of the mean for the magnitude of Y24
with respect to the couple of random variables (L, S) obtained with the MC
analysis performed with the technique [31] (full black line), technique [13]
(green dashed line), and proposed PC-based method (dotted red line).
samples needed by the technique [13] are computed over a
Smolyak sparse grid, composed of 29 (L, S) samples [13],
whereas 0.0001 is assumed as the maximum weighted rms error
SPINA et al.: EFFICIENT VA OF ELECTROMAGNETIC SYSTEMS
Fig. 13. Example B. Comparison of the standard deviation for the magnitude
of Y24 with respect to the couple of random variables (L, S) obtained with
the MC analysis performed with the technique [31] (full black line), technique
[13] (green dashed line), and proposed PC-based method (dotted red line).
TABLE VII
EXAMPLE B. EFFICIENCY OF THE PROPOSED PC-BASED
TECHNIQUE FOR (L, S)
to estimate the common order of the projection operators
calculated for all the initial (L, S) samples.
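The common-order estimation step can be sketched as follows: the per-sample projection matrices are stacked, an SVD is taken as in (31), and only the left singular vectors whose singular values exceed the 0.0001 threshold are retained to form the common projection matrix (32). The list `V_list` is a hypothetical placeholder for the projection matrices computed at the initial samples.

```python
import numpy as np

def common_projection(V_list, tol=1e-4):
    """Common projection matrix from per-sample projection matrices.

    V_list: list of (n x q) projection matrices, one per initial
    sample. The matrices are stacked column-wise, an SVD is taken,
    and only the left singular vectors whose (relative) singular
    values exceed tol are kept.
    """
    V_all = np.hstack(V_list)
    U, s, _ = np.linalg.svd(V_all, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))  # number of significant singular values
    return U[:, :r]
```

The returned matrix has orthonormal columns, so it can be applied as a single congruence transform to all the sampled PEEC systems.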
The PC model of the projection operator computed by the
technique [13] is not accurate (Fig. 11). This leads to poor
accuracy in computing the system variability features with
respect to the corresponding results given by the novel proposed
method and the MC analysis performed using the parameterized
MOR technique [31], as shown in Figs. 12 and 13,
although the number of initial samples used to implement
the technique [13] is almost double the number used for
TABLE VIII
EXAMPLE B. MEMORY REQUIREMENTS OF THE PROPOSED PC-BASED
TECHNIQUE AND OF THE TECHNIQUE [13] FOR (L, S)
Fig. 14. Example B. Top: the variability of the magnitude of Y13 with respect
to the random variables (L, S, W) in the frequency range 100 kHz–5 GHz.
Bottom: variability of the magnitude of Y13 in the frequency range 100 kHz–
100 MHz. In both plots, the blue lines are the results of the MC simulations
and the red thick line corresponds to the nominal value for L0 = 5 mm,
S0 = 2 mm, and W0 = 0.5 mm.
the proposed PC-based method. In particular, Figs. 12 and 13
show the mean and standard deviation of the magnitude of Y24
calculated with respect to the random variables (L, S). Similar
results can be obtained for the other entries of the admittance
parameters. Furthermore, the VA performed with the proposed
method shows a great efficiency both in terms of memory and
computational time, as shown in Tables VII and VIII.
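The mean and standard deviation in Figs. 12 and 13 can be read directly off the PC coefficients of each admittance entry: for an orthonormal PC basis, the mean is the zeroth-order coefficient and the variance is the sum of the squared higher-order coefficients. A minimal sketch (the coefficient vector `c` is hypothetical, not taken from the article):

```python
import numpy as np

def pc_mean_std(c):
    """Mean and standard deviation of a quantity expanded in an
    orthonormal PC basis.

    c: real array of shape (M + 1,) of PC coefficients; c[0] is the
    mean, and the variance is the sum of the squares of the
    remaining coefficients.
    """
    c = np.asarray(c, dtype=float)
    mean = c[0]
    std = np.sqrt(np.sum(c[1:] ** 2))
    return mean, std
```

This is what makes PC-based VA inexpensive: no sampling of the full system is needed for these moments.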
Finally, the VA of the system in Fig. 10 is performed with
respect to the set of random variables (L, S, W), varying in a
range of ±10% around the central values previously mentioned,
in order to show the performance of the proposed PC
method as the number of random variables increases.
Figure 14 shows an example of the system variability with
respect to the chosen random variables (L, S, W).
The system admittance parameters are evaluated using the
PEEC method over a grid of 3 × 3 × 3 (L, S, W) samples.
TABLE IX
EXAMPLE B. EFFICIENCY OF THE PROPOSED PC-BASED
TECHNIQUE FOR (L, S, W)
Fig. 15. Example B. Comparison of the mean for the magnitude of Y43
obtained with the MC analysis performed with the technique [31] using 10 000
(full black line), 1000 (green dashed line), and 100 (blue exes (×)) (L, S, W)
samples and the proposed PC-based method (dotted red line).
The evaluation of the common order of the reduced models and
of the r significant singular values of the SVD decomposition
in (31) is performed with the same settings used in the previous
case, giving reduced-order models of order 328. Next, the
set of reduced-order matrices is modeled using a second-
order (P = 2) PC expansion, giving M = 9 for (L, S, W),
according to (3). The proposed PC-based method is compared
with the MC analysis performed with the technique [31] for
100, 1000, and 10 000 samples.
The proposed method shows a great efficiency in computing
the system variability features, as shown in Table IX,
which compares the computational cost of the MC analysis
Fig. 16. Example B. Comparison of the standard deviation for the magnitude
of Y43 obtained with the MC analysis performed with the technique [31] using
10000 (full black line), 1000 (green dashed line), and 100 (blue exes (×))
(L, S, W) samples and the proposed PC-based method (dotted red line).
Fig. 17. Example B. PDF and CDF of the magnitude of Y11 at 100 kHz. Full
black line: PDF computed using the novel technique. Dashed black line: CDF
computed using the novel technique. Circles (◦): PDF computed using the MC
technique performed with the technique [31]. Squares (□): CDF computed
using the MC technique performed with the technique [31].
performed with the technique [31] for 10 000 samples and
of the novel proposed method, and a great accuracy with
respect to the corresponding analysis performed with the
technique [31], as described by Figs. 15–17. In particular,
Figs. 15 and 16 show the mean and standard deviation of
the magnitude of Y43 obtained with the proposed PC-based
method and the MC analysis performed using different sets of
samples. Figure 17 describes the PDF and the cumulative dis-
tribution function (CDF) of the magnitude of Y11 at 100 kHz
calculated with respect to the random variables (L, S, W)
using 10 000 samples for the MC results. Similar results can
be obtained for the other entries of the admittance parameters.
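Once the PC-based augmented model is available, distributions such as those in Fig. 17 can be estimated by sampling the inexpensive PC model rather than the original PEEC system. A sketch, assuming for illustration standard normal germ variables and a hypothetical evaluator `eval_pc` mapping them to the modeled output:

```python
import numpy as np

def pc_pdf_cdf(eval_pc, N, n_samples=10_000, bins=50, seed=0):
    """Empirical PDF/CDF of a PC-expanded quantity.

    eval_pc: callable mapping an (n_samples, N) array of standard
    normal variates to the modeled output (e.g., |Y11| at a fixed
    frequency). Sampling the PC model is cheap compared with
    re-solving the full electromagnetic system.
    """
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((n_samples, N))
    y = eval_pc(xi)
    # Normalized histogram as the PDF estimate; cumulative sum of
    # (bin density * bin width) as the CDF estimate.
    pdf, edges = np.histogram(y, bins=bins, density=True)
    cdf = np.cumsum(pdf) * np.diff(edges)
    return pdf, cdf, edges
```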
V. CONCLUSION
In this paper, a novel and efficient technique for stochas-
tic MOR of general linear multiport systems is presented.
The core of the proposed technique is the application of
the PC expansion to a set of reduced-order systems in
descriptor state-space form obtained through a MOR step. In
addition to its accuracy, the proposed approach offers great
flexibility. Indeed, not only can the novel method be applied
to systems whose frequency-domain transfer function can be
expressed in different forms (e.g., scattering or admittance
parameters), but also different MOR techniques can be used
for the calculation of the reduced-order system in descriptor
state-space form. Two pertinent numerical examples
validate the accuracy and efficiency of the proposed method
with respect to existing techniques concerning the calculation
of system variability features.
REFERENCES
[1] R. F. Harrington, Field Computation by Moment Methods. New York,
NY, USA: Macmillan, 1968.
[2] A. E. Ruehli, “Equivalent circuit models for three dimensional multi-
conductor systems,” IEEE Trans. Microw. Theory Tech., vol. 22, no. 3,
pp. 216–221, Mar. 1974.
[3] J. M. Jin, The Finite Element Method in Electromagnetics, 2nd ed.
New York, NY, USA: Wiley, 2002.
[4] D. Xiu and G. M. Karniadakis, “The Wiener-Askey polynomial chaos
for stochastic differential equations,” SIAM J. Sci. Comput., vol. 24,
no. 2, pp. 619–644, Apr. 2002.
[5] C. Soize and R. Ghanem, “Physical systems with random uncertainties:
Chaos representations with arbitrary probability measure,” SIAM J. Sci.
Comput., vol. 26, no. 2, pp. 395–410, Jul. 2004.
[6] J. A. S. Witteveen and H. Bijl, “Modeling arbitrary uncertainties using
Gram-Schmidt Polynomial Chaos,” in Proc. 44th AIAA Aerosp. Sci.
Meeting Exhibit., Jan. 2006, pp. 1–17.
[7] M. S. Eldred, “Recent advance in non-intrusive polynomial-chaos and
stochastic collocation methods for uncertainty analysis and design,” in
Proc. 50th AIAA/ASME/ASCE/AHS/ASC Struct., Struct. Dyn., Mater.
Conf., Palm Springs, CA, USA, May 2009, pp. 1–37.
[8] G. Blatman and B. Sudret, “An adaptive algorithm to build up sparse
polynomial chaos expansions for stochastic finite element analysis,”
Probab. Eng. Mech., vol. 25, no. 2, pp. 183–197, 2010.
[9] S. Vrudhula, J. M. Wang, and P. Ghanta, “Hermite polynomial based
interconnect analysis in the presence of process variations,” IEEE
Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 25, no. 10,
pp. 2001–2011, Oct. 2006.
[10] N. Mi, S. X.-D. Tan, Y. Cai, and X. Hong, “Fast variational analysis of
on-chip power grids by stochastic extended Krylov subspace method,”
IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 27, no. 11,
pp. 1996–2006, Nov. 2008.
[11] P. Sumant, H. Wu, A. Cangellaris, and N. Aluru, “A sparse grid
based collocation method for model order reduction of finite element
approximations of passive electromagnetic devices under uncertainty,”
in Proc. IEEE MTT-S Int. Microw. Symp. Dig., Anaheim, CA, USA,
May 2010, pp. 1652–1655.
[12] P. Sumant, H. Wu, A. Cangellaris, and N. Aluru, “Order reduction
of finite element models of passive electromagnetic structures with
statistical variability,” in Proc. URSI Int. Symp. Electromagn. Theory,
Aug. 2010, pp. 688–691.
[13] P. Sumant, H. Wu, A. Cangellaris, and N. Aluru, “Reduced-order models
of finite element approximations of electromagnetic devices exhibiting
statistical variability,” IEEE Trans. Antennas Propag., vol. 60, no. 1,
pp. 301–309, Jan. 2012.
[14] L. Knockaert, T. Dhaene, F. Ferranti, and D. De Zutter, “Model order
reduction with preservation of passivity, non-expansivity and Markov
moments,” Syst. Control Lett., vol. 60, no. 1, pp. 53–61, Jan. 2011.
[15] J. Phillips, L. Daniel, and L. M. Silveira, “Guaranteed passive balancing
transformations for model order reduction,” IEEE Trans. CAD Integr.
Circuits Syst., vol. 22, no. 8, pp. 1027–1041, Aug. 2003.
[16] J. Phillips and L. M. Silveira, “Poor man’s TBR: A simple model
reduction scheme,” IEEE Trans. CAD Integr. Circuits Syst., vol. 24, no. 1,
pp. 43–55, Jan. 2005.
[17] V. Barthelmann, E. Novak, and K. Ritter, “High dimensional polynomial
interpolation on sparse grid,” Adv. Comput. Math., vol. 12, no. 4,
pp. 273–288, Mar. 2000.
[18] E. Novak and K. Ritter, “High dimensional integration of smooth func-
tions over cubes,” Numer. Math., vol. 75, no. 1, pp. 79–97, Nov. 1996.
[19] E. Novak and K. Ritter, “Simple cubature formulas with high polynomial
exactness,” Construct. Approximation, vol. 15, no. 4, pp. 499–522,
Oct. 1999.
[20] E. R. Samuel, F. Ferranti, L. Knockaert, and T. Dhaene, “Parameterized
reduced order models with guaranteed passivity using matrix interpola-
tion,” in Proc. IEEE 16th Workshop SPI, May 2012, pp. 65–68.
[21] A. Der Kiureghian and P. L. Liu, “Structural reliability under incom-
plete probability information,” J. Eng. Mech., ASCE, vol. 112, no. 1,
pp. 85–104, Jan. 1986.
[22] M. Loève, Probability Theory, 4th ed. Berlin, Germany: Springer-Verlag,
1977.
[23] A. Papoulis, Probability, Random Variables and Stochastic Processes.
New York, NY, USA: McGraw-Hill, 1991.
[24] D. Spina, F. Ferranti, T. Dhaene, L. Knockaert, G. Antonini, and
D. Vande Ginste, “Variability analysis of multiport systems via
polynomial-chaos expansion,” IEEE Trans. Microw. Theory Tech.,
vol. 60, no. 8, pp. 2329–2338, Aug. 2012.
[25] D. Vande Ginste, D. De Zutter, D. Deschrijver, T. Dhaene, P. Manfredi,
and F. Canavero, “Stochastic modeling-based variability analysis of on-
chip interconnects,” IEEE Trans. Compon., Packag., Manuf. Technol.,
vol. 2, no. 7, pp. 1182–1192, Jul. 2012.
[26] I. S. Stievano, P. Manfredi, and F. G. Canavero, “Parameters vari-
ability effects on multiconductor interconnects via Hermite polynomial
chaos,” IEEE Trans. Compon., Packag., Manuf. Technol., vol. 1, no. 8,
pp. 1234–1239, Aug. 2011.
[27] L. Knockaert and D. De Zutter, “Laguerre-SVD reduced order model-
ing,” IEEE Trans. Microw. Theory Tech., vol. 48, no. 9, pp. 1469–1475,
Sep. 2000.
[28] A. Odabasioglu, M. Celik, and L. T. Pileggi, “PRIMA: Passive reduced-
order interconnect macromodeling algorithm,” IEEE Trans. Comput.-
Aided Des., vol. 17, no. 8, pp. 645–654, Aug. 1998.
[29] G. Antonini, “A dyadic Green’s function based method for the transient
analysis of lossy and dispersive multiconductor transmission lines,”
IEEE Trans. Microw. Theory Tech., vol. 56, no. 4, pp. 880–895,
Apr. 2008.
[30] F. Ferranti, G. Antonini, T. Dhaene, and L. Knockaert, “Guaranteed
passive parameterized model order reduction of the partial element
equivalent circuit (PEEC) method,” IEEE Trans. Electromagn. Compat.,
vol. 52, no. 4, pp. 974–984, Nov. 2010.
[31] F. Ferranti, G. Antonini, T. Dhaene, L. Knockaert, and A. E. Ruehli,
“Physics-based passivity-preserving parameterized model order reduc-
tion for PEEC circuit analysis,” IEEE Trans. Compon., Packag., Manuf.
Technol., vol. 1, no. 3, pp. 399–409, Mar. 2011.
Domenico Spina received the M.S. degree (summa
cum laude) in electronic engineering from Università
degli Studi dell’Aquila, L’Aquila, Italy, in 2010.
He is currently pursuing the Ph.D. degree with
the Department of Information Technology, Ghent
University, Ghent, Belgium.
His current research interests include modeling
and simulation, system identification, microwave
engineering, and sensitivity and uncertainty analysis.
Francesco Ferranti (M’10) received the B.S. degree
(summa cum laude) in electronic engineering from
Università degli Studi di Palermo, Palermo, Italy,
the M.S. degree (summa cum laude with honors)
in electronic engineering from Università degli
Studi dell’Aquila, L’Aquila, Italy, and the Ph.D.
degree in electrical engineering from Ghent
University, Ghent, Belgium, in 2005, 2007, and
2011, respectively.
He is currently a Post-Doctoral Research
Fellow of the Research Foundation Flanders
(FWO-Vlaanderen) with the Department of Information Technology, Ghent
University. His current research interests include parametric macromodeling,
parameterized model order reduction, signal integrity, electromagnetic
compatibility, numerical modeling, and system identification.
Giulio Antonini (M’94–SM’05) received the Laurea
(summa cum laude) and Ph.D. degrees in electrical
engineering from Università degli Studi dell’Aquila,
L’Aquila, Italy, and the University of Rome La
Sapienza, Rome, Italy, in 1994 and 1998, respec-
tively.
He has been with the Department of Electrical
Engineering, UAq EMC Laboratory, University of
L’Aquila, L’Aquila, since 1998, where he is cur-
rently a Full Professor. He has given keynote lectures
and chaired several special sessions at international
conferences.
Tom Dhaene (M’00–SM’05) was born in Deinze,
Belgium, in 1966. He received the Ph.D. degree in
electrotechnical engineering from Ghent University,
Ghent, Belgium, in 1993.
He was a Research Assistant with the Department
of Information Technology, Ghent University, Ghent,
Belgium, from 1989 to 1993, where his research
focused on different aspects of full-wave electro-
magnetic circuit modeling, transient simulation, and
time-domain characterization of high-frequency and
high-speed interconnections. In 1993, he joined the
Electronic Design Automation Company Alphabit (now part of Agilent), Santa
Clara, CA, USA. He was one of the key developers of the planar EM simulator
ADS Momentum. In 2000, he joined the Department of Mathematics and
Computer Science, University of Antwerp, Antwerp, Belgium, as a Professor.
Since 2007, he has been a Full Professor with the Department of Informa-
tion Technology, Ghent University, Ghent, Belgium. He has authored and
co-authored more than 250 peer-reviewed papers and abstracts in international
conference proceedings, journals, and books. He holds five U.S. patents.
Luc Knockaert (M’81–SM’00) received the M.Sc.
degrees in physical engineering and telecommunica-
tions engineering, and the Ph.D. degree in electrical
engineering from Ghent University, Ghent, Belgium,
in 1974, 1977, and 1987, respectively.
He worked on North-South cooperation and development
projects with the universities of the Democratic Republic
of the Congo and Burundi from 1979 to 1984 and from
1988 to 1995. He
is currently with the Interdisciplinary Institute for
BroadBand Technologies and a Professor with the
Department of Information Technology, Ghent University, Ghent, Belgium. He
has authored and co-authored more than 100 papers published in international
journals and conference proceedings. His current research interests include the
application of linear algebra and adaptive methods in signal estimation, model
order reduction, and computational electromagnetics.
Prof. Knockaert is a member of the Mathematical Association of America
and the Society for Industrial and Applied Mathematics.