Modified Procedure to Solve Fuzzy Transshipment Problem by using Trapezoidal ... (inventionjournals)
This paper deals with the large-scale transshipment problem in a fuzzy environment, determining efficient solutions for it. Vogel's Approximation Method (VAM) is a technique for finding a good initial feasible solution to an allocation problem; here it is used to find an efficient initial solution for the large-scale fuzzy transshipment problem.
Multi – Objective Two Stage Fuzzy Transportation Problem with Hexagonal Fuzzy... (IJERA Editor)
A fuzzy geometric programming approach is used to determine the optimal solution of a multi-objective two-stage fuzzy transportation problem in which supplies and demands are hexagonal fuzzy numbers and a fuzzy membership of the objective function is defined. The paper aims to find the best compromise solution among the set of feasible solutions for this problem, and a numerical example illustrates the proposed method.
The document presents a decomposition method for solving indefinite quadratic programming problems with n variables and m linear constraints. The method decomposes the original problem into at most m subproblems, each with dimension n-1 and m linear constraints. All global minima, isolated local minima, and some non-isolated local minima of the original problem can be obtained by combining the solutions of the subproblems. The subproblems can then be further decomposed into smaller subproblems until 1-dimensional subproblems are reached, which can be solved directly.
The document discusses greedy algorithms and the knapsack problem. It defines greedy algorithms as making locally optimal choices at each step to find a global optimum. The general method is described as starting with a small solution and building up, making short-sighted choices. Knapsack problems aim to fill a knapsack of size S with items of varying sizes and values, often choosing items with the highest value-to-size ratios first. The fractional knapsack problem allows items to be partially selected to maximize total value within the size limit.
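The fractional variant described above can be sketched in a few lines; the item values and sizes below are illustrative, not from the document:

```python
# Fractional knapsack: a minimal sketch of the greedy strategy above,
# taking items in decreasing value-to-size ratio and splitting the last one.

def fractional_knapsack(items, capacity):
    """items: list of (value, size); returns the maximum total value."""
    items = sorted(items, key=lambda vs: vs[0] / vs[1], reverse=True)
    total = 0.0
    for value, size in items:
        if capacity <= 0:
            break
        take = min(size, capacity)      # whole item, or a fraction of it
        total += value * (take / size)
        capacity -= take
    return total

# Example: knapsack of size 50 with three items.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # → 240.0
```

The greedy choice is optimal here precisely because items may be split; the 0/1 variant needs dynamic programming instead.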
A generalized class of normalized distance functions called Q-Metrics is described in this presentation. The Q-Metrics approach relies on a unique functional, using a single bounded parameter (Lambda), which characterizes the conventional distance functions in a normalized per-unit metric space. In addition to this coverage property, a distinguishing and attractive characteristic of the Q-Metric function is its low computational complexity. Q-Metrics satisfy the standard metric axioms. Novel networks for classification and regression tasks are defined and constructed using Q-Metrics; these networks are shown to outperform conventional feedforward backpropagation networks of the same size when tested on real data sets.
This document presents a systematic approach for solving mixed intuitionistic fuzzy transportation problems. It begins with definitions of fuzzy sets, intuitionistic fuzzy sets, and triangular intuitionistic fuzzy numbers. It then formulates an intuitionistic fuzzy transportation problem and proposes a mixed intuitionistic fuzzy zero point method to find the optimal solution in terms of triangular intuitionistic fuzzy numbers. Finally, it provides the computational procedure and illustrates the method with a numerical example.
Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete:
An optimization problem with discrete variables is known as a discrete optimization problem, in which an object such as an integer, permutation, or graph must be found from a countable set.
A problem with continuous variables is known as a continuous optimization problem, in which an optimal value of a continuous function must be found; such problems can include constrained and multimodal problems.
The document discusses algorithms and data structures using divide and conquer and greedy approaches. It covers topics like matrix multiplication, convex hull, binary search, activity selection problem, knapsack problem, and their algorithms and time complexities. Examples are provided for convex hull, binary search, activity selection, and knapsack problem algorithms. The document is intended as teaching material on design and analysis of algorithms.
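Of the topics listed, binary search is the simplest to sketch; a minimal iterative version, assuming a sorted input list:

```python
# Iterative binary search (divide and conquer): halve the search range
# each step, giving O(log n) time on a sorted list.

def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # split the range in half
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1            # discard the left half
        else:
            hi = mid - 1            # discard the right half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))   # → 3
print(binary_search([2, 3, 5, 7, 11, 13], 6))   # → -1
```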
This document presents a two-stage method for solving fuzzy transportation problems where the costs, supplies, and demands are represented by symmetric trapezoidal fuzzy numbers. In the first stage, the problem is solved to satisfy minimum demand requirements. Remaining supplies are then distributed in the second stage to further minimize costs. A numerical example demonstrates using robust ranking techniques to convert the fuzzy problem into a crisp one, which is then solved using a zero suffix method. The total optimal costs from both stages provide the solution to the original fuzzy transportation problem.
This document describes the quadratic assignment problem (QAP), with a formulation involving 358 constraints and 50 variables, and provides an example with 3 facilities and 3 locations. The QAP assigns facilities to locations so as to minimize total cost, which is a function of the flow between facilities and the distance between locations. Several applications of QAP are discussed, including facility location, scheduling, and ergonomic design problems.
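For a 3-facility instance like the one mentioned above, the QAP can be checked by brute force; the flow and distance matrices below are illustrative, not taken from the document:

```python
# Brute-force QAP: enumerate all assignments p (facility i -> location p[i])
# and pick the one minimizing sum over i, j of flow[i][j] * dist[p[i]][p[j]].
from itertools import permutations

flow = [[0, 3, 1],
        [3, 0, 2],
        [1, 2, 0]]
dist = [[0, 1, 4],
        [1, 0, 2],
        [4, 2, 0]]

def qap_cost(p):
    n = len(p)
    return sum(flow[i][j] * dist[p[i]][p[j]]
               for i in range(n) for j in range(n))

best = min(permutations(range(3)), key=qap_cost)
print(best, qap_cost(best))   # → (0, 1, 2) 22
```

Enumeration is only viable for tiny instances; QAP is NP-hard, which is why larger formulations resort to integer programming or heuristics.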
1. Hash tables are good for random access of elements but not sequential access. When records need to be accessed sequentially, hashing can be problematic because elements are stored in random locations instead of consecutively.
2. To find the successor of a node in a binary search tree, we do not simply take the right child: when a right child exists, the successor is the leftmost (minimum) node of the right subtree; otherwise it is the lowest ancestor whose left subtree contains the node. This operation has runtime O(h), where h is the tree height, not O(1).
3. When comparing insertion, deletion, and searching across data structures: sorted arrays support fast searching (binary search) but slow insertion and deletion, since elements must be shifted; linked lists allow O(1) insertion and deletion at a known position but require linear-time search; balanced binary search trees offer O(log n) average performance for all three operations, falling between the two.
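The successor operation can be sketched directly; note that the search walks down from the root, so the cost is O(h) rather than O(1). The keys below are illustrative:

```python
# In-order successor in a BST: track the last node where we turned left;
# that node holds the smallest key strictly greater than the query key.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def successor(root, key):
    """Smallest key in the tree strictly greater than key, or None."""
    succ = None
    node = root
    while node:
        if key < node.key:
            succ = node.key      # candidate successor; try to go smaller
            node = node.left
        else:
            node = node.right
    return succ

root = None
for k in [20, 8, 22, 4, 12, 10, 14]:
    root = insert(root, k)
print(successor(root, 8))    # → 10
print(successor(root, 14))   # → 20
```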
The document discusses greedy algorithms and their properties. It describes how greedy algorithms work by making locally optimal choices at each step in the hope of reaching a globally optimal solution. Two examples are given: the activity selection problem and finding minimum spanning trees. Prim's algorithm for finding minimum spanning trees is described in detail, showing how it works by always selecting the lightest edge between the growing tree and remaining vertices.
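Prim's rule described above, always taking the lightest edge crossing from the tree to the remaining vertices, can be sketched with a binary heap (the sample graph is illustrative):

```python
# Prim's algorithm: grow the MST from an arbitrary start vertex, repeatedly
# popping the lightest crossing edge from a heap; O(E log V) overall.
import heapq

def prim_mst_weight(graph, start):
    """graph: {u: [(weight, v), ...]} undirected; returns total MST weight."""
    visited = {start}
    edges = list(graph[start])
    heapq.heapify(edges)
    total = 0
    while edges and len(visited) < len(graph):
        w, v = heapq.heappop(edges)       # lightest crossing edge so far
        if v in visited:
            continue                      # both ends already in the tree
        visited.add(v)
        total += w
        for edge in graph[v]:
            if edge[1] not in visited:
                heapq.heappush(edges, edge)
    return total

g = {
    'a': [(4, 'b'), (8, 'c')],
    'b': [(4, 'a'), (11, 'c'), (8, 'd')],
    'c': [(8, 'a'), (11, 'b'), (7, 'd')],
    'd': [(8, 'b'), (7, 'c')],
}
print(prim_mst_weight(g, 'a'))   # → 19
```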
The document discusses backtracking and branch and bound algorithms. Backtracking incrementally builds candidates and abandons them (backtracks) when they cannot lead to a valid solution. Branch and bound systematically enumerates solutions and discards branches that cannot produce a better solution than the best found so far based on upper bounds. Examples provided are the N-Queens problem solved with backtracking and the knapsack problem solved with branch and bound. Pseudocode is given for both algorithms.
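The backtracking half of that summary can be sketched with the N-Queens example it cites: extend a partial placement column by column and abandon any branch as soon as a queen is attacked.

```python
# N-Queens by backtracking: place one queen per column; prune a branch the
# moment a row or diagonal conflict appears, then backtrack.

def n_queens(n):
    """Return the number of distinct solutions on an n x n board."""
    solutions = 0

    def place(col, rows, diag1, diag2):
        nonlocal solutions
        if col == n:
            solutions += 1
            return
        for row in range(n):
            if row in rows or row - col in diag1 or row + col in diag2:
                continue                  # attacked square: prune
            place(col + 1, rows | {row},
                  diag1 | {row - col}, diag2 | {row + col})

    place(0, set(), set(), set())
    return solutions

print(n_queens(4))   # → 2
print(n_queens(8))   # → 92
```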
The document provides a course calendar for a class on Bayesian estimation methods. It lists the dates and topics to be covered over 15 class periods from September to January. The topics progress from basic concepts like Bayes estimation and the Kalman filter, to more modern methods like particle filters, hidden Markov models, Bayesian decision theory, and applications of principal component analysis and independent component analysis. One session on the calendar is marked as having no class.
Application of Graphic LASSO in Portfolio Optimization_Yixuan Chen & Mengxi J... (Mengxi Jiang)
- The document describes using graphical lasso to estimate the precision matrix of stock returns and apply portfolio optimization.
- Graphical lasso estimates the precision matrix instead of the covariance matrix to allow for sparsity. This makes the estimation more efficient for large datasets.
- The study uses 8 different models to simulate stock return data and compares the performance of graphical lasso, sample covariance, and shrinkage estimators on portfolio optimization of in-sample and out-of-sample test data. Graphical lasso performed best on out-of-sample test data, showing it can generate portfolios that generalize well.
Comparative Analysis of Algorithm Strategies (Talha Shaikh)
The document discusses various algorithm strategies including decrease and conquer, greedy approach, backtracking, and transform and conquer. It provides definitions, examples, advantages and disadvantages for each strategy. Decrease and conquer algorithms like insertion sort reduce the problem size at each step. Greedy algorithms make locally optimal choices at each step. Backtracking algorithms systematically explore all solutions using a depth first search approach.
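Insertion sort, the decrease-and-conquer example named above, reduces the problem by one element per pass: the first i elements are kept sorted and element i is inserted into place.

```python
# Insertion sort (decrease-by-one): shift larger elements right until the
# current key's position is found. O(n^2) worst case, O(n) on sorted input.

def insertion_sort(a):
    a = list(a)                      # work on a copy
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key: # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # → [1, 2, 3, 4, 5, 6]
```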
Independent Component Analysis (ICA) is a statistical technique used to separate mixed signals into their independent source components. ICA assumes that the observed mixed data was generated by mixing together statistically independent source signals. ICA uses an "unmixing" matrix to separate the mixed signals by maximizing the statistical independence of the estimated components, with the goal of recovering the original independent source signals. ICA models the probability distribution of each independent source signal using the sigmoid function, and then iteratively updates the unmixing matrix weights to maximize the overall likelihood of the data, until convergence is reached. However, ICA has limitations in that the original source signal order and scaling cannot be determined if the source signals are Gaussian distributed.
This assignment discusses two algorithm design techniques - dynamic programming and decrease-and-conquer. It provides questions to design algorithms using these techniques for problems like rod cutting, shortest paths on a chessboard, insertion sort, and checking graph connectivity. Students must submit the assignment by the given deadline or face point deductions, and no presentations will be held after a certain date.
Dynamic Programming design technique is one of the fundamental algorithm design techniques, and possibly one of the ones that are hardest to master for those who did not study it formally. In these slides (which are continuation of part 1 slides), we cover two problems: maximum value contiguous subarray, and maximum increasing subsequence.
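The first of those two problems, the maximum-value contiguous subarray, has a classic O(n) dynamic-programming solution (Kadane's algorithm): the best sum ending at position i either extends the best sum ending at i-1 or restarts at i.

```python
# Kadane's algorithm: one pass, tracking the best subarray sum ending at the
# current element and the best seen overall.

def max_subarray(a):
    best_ending_here = best = a[0]
    for x in a[1:]:
        best_ending_here = max(x, best_ending_here + x)  # extend or restart
        best = max(best, best_ending_here)
    return best

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))   # → 6
```

The recurrence is exactly the "optimal substructure" the slides build on: a subarray ending at i is either just a[i] or a[i] appended to the best subarray ending at i-1.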
(DL hacks reading group) How to Train Deep Variational Autoencoders and Probabilistic Lad... (Masahiro Suzuki)
This document discusses techniques for training deep variational autoencoders and probabilistic ladder networks. It proposes three advances: 1) Using an inference model similar to ladder networks with multiple stochastic layers, 2) Adding a warm-up period to keep units active early in training, and 3) Using batch normalization. These advances allow training models with up to five stochastic layers and achieve state-of-the-art log-likelihood results on benchmark datasets. The document explains variational autoencoders, probabilistic ladder networks, and how the proposed techniques parameterize the generative and inference models.
The document discusses greedy algorithms and how they can be applied to solve optimization problems like the knapsack problem, activity selection problem, and job sequencing problem. It provides examples of greedy algorithms for each problem. A greedy algorithm works by making locally optimal choices at each step in the hope of finding a globally optimal solution. For the knapsack problem, the greedy approach sorts items by profit/weight ratio and fills the knapsack accordingly. For activity selection, it schedules activities based on earliest finish time. For job sequencing, it sorts jobs by profit and schedules the most profitable jobs that meet deadlines.
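The job-sequencing heuristic summarized above can be sketched as follows; the (profit, deadline) pairs are illustrative:

```python
# Greedy job sequencing with deadlines: sort jobs by profit, then place each
# job in the latest still-free unit time slot at or before its deadline.

def job_sequencing(jobs):
    """jobs: list of (profit, deadline); returns total scheduled profit."""
    jobs = sorted(jobs, reverse=True)           # most profitable first
    max_deadline = max(d for _, d in jobs)
    slot_free = [True] * (max_deadline + 1)     # slots 1..max_deadline
    total = 0
    for profit, deadline in jobs:
        for t in range(deadline, 0, -1):        # latest free slot first
            if slot_free[t]:
                slot_free[t] = False
                total += profit
                break                           # job scheduled; next job
    return total

# Jobs given as (profit, deadline).
print(job_sequencing([(100, 2), (19, 1), (27, 2), (25, 1), (15, 3)]))  # → 142
```

Placing each job as late as possible keeps earlier slots open for jobs with tighter deadlines, which is what makes this greedy choice safe.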
A Correlative Information-Theoretic Measure for Image Similarity (Farah M. Altufaili)
A hybrid measure is proposed for assessing the similarity among gray-scale images. The well-known Structural Similarity Index Measure (SSIM) was designed using a statistical approach that fails under significant noise (low PSNR). The proposed measure, denoted SjhCorr2, combines two parts: the first is information-theoretic, while the second is based on 2D correlation. The concept of a symmetric joint histogram is used in the information-theoretic part. The new measure retains the advantages of both statistical and information-theoretic approaches, is robust under noise, and outperforms the classical SSIM in detecting image similarity at low PSNR.
Penalty Function Method For Solving Fuzzy Nonlinear Programming Problem (paperpublications3)
Abstract: In this work, a fuzzy nonlinear programming problem (FNLPP) is developed and its results are discussed. Numerical solutions of the crisp problems are compared, and the fuzzy solution and its effectiveness are presented and discussed. A penalty function method is developed and combined with the Nelder–Mead direct-search optimization algorithm to solve the FNLPP.
Keywords: fuzzy set theory, fuzzy numbers, decision making, nonlinear programming, Nelder–Mead algorithm, penalty function method.
This document discusses using principal component analysis (PCA) and the discrete cosine transform (DCT) to recognize images from a database. It explains how bitmaps store image data, DCT compacts image energy, and PCA reduces dimensionality by finding the principal components via eigenvectors of the covariance matrix. An algorithm is proposed that uses DCT, PCA on a 3x3 block, characteristic equations to find the maximum eigenvector, and least mean square comparison to recognize queries against the database images.
The document discusses greedy algorithms and their applications. It provides examples of problems that greedy algorithms can solve optimally, such as the change making problem and finding minimum spanning trees (MSTs). It also discusses problems where greedy algorithms provide approximations rather than optimal solutions, such as the traveling salesman problem. The document describes Prim's and Kruskal's algorithms for finding MSTs and Dijkstra's algorithm for solving single-source shortest path problems. It explains how these algorithms make locally optimal choices at each step in a greedy manner to build up global solutions.
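Dijkstra's algorithm, covered in that summary, can be sketched with a binary heap; the sample graph is illustrative:

```python
# Dijkstra's single-source shortest paths: greedily settle the unvisited
# vertex with the smallest tentative distance, using a heap with lazy
# deletion (stale entries are skipped when popped).
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, weight), ...]}; returns {vertex: distance}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {
    's': [('a', 1), ('b', 4)],
    'a': [('b', 2), ('c', 6)],
    'b': [('c', 3)],
    'c': [],
}
print(dijkstra(g, 's'))   # → {'s': 0, 'a': 1, 'b': 3, 'c': 6}
```

The greedy step is safe only because edge weights are non-negative; with negative edges one needs Bellman–Ford instead.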
Principal components analysis (PCA) is an algorithm that identifies the subspace where data approximately lies in a way that requires only an eigenvector calculation. PCA works by finding the directions, called principal components, that maximize the variance of the projected data points. It does this by computing the eigenvectors of the covariance matrix of the data, with the top eigenvectors providing the principal components that best explain the variability in the data using a reduced dimensional space. PCA has applications in data compression, dimensionality reduction, noise reduction, and data visualization.
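For 2-D data the eigenvector calculation described above has a closed form, so the first principal component can be sketched without any linear-algebra library (the data points below are illustrative):

```python
# Minimal PCA sketch for 2-D points: center the data, build the 2x2
# covariance matrix, and take the eigenvector of its largest eigenvalue
# as the first principal component (closed form for symmetric 2x2).
import math

def first_principal_component(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Covariance matrix [[sxx, sxy], [sxy, syy]]
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Largest eigenvalue of the symmetric 2x2 matrix
    lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    # Corresponding eigenvector (lam - syy, sxy), normalized
    vx, vy = lam - syy, sxy
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)

# Points lying roughly along y = x: the component is close to (0.707, 0.707).
pc = first_principal_component([(0, 0), (1, 1.1), (2, 1.9), (3, 3.0)])
print(pc)
```

Projecting each centered point onto this unit vector gives the one-dimensional representation that maximizes the variance of the projected data, which is exactly the PCA objective stated above.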
This document proposes using machine learning techniques to predict COVID-19 infections based on chest x-ray images. Specifically, it involves using discrete wavelet transform to extract space-frequency features from chest x-rays, reducing the dimensionality of features using Shannon entropy, and then training standard machine learning classifiers like logistic regression, support vector machine, decision tree, and convolutional neural network on the extracted features to classify images as COVID-19 positive or negative. The document provides background on the proposed techniques of discrete wavelet transform, entropy, and various machine learning models.
Sensitivity Analysis of GRA Method for Interval Valued Intuitionistic Fuzzy M... (ijsrd.com)
The aim of this paper is to investigate multiple attribute decision making (MADM) problems with intuitionistic fuzzy information, in which the information about attribute weights is incompletely known and the attribute values take the form of intuitionistic fuzzy numbers. To obtain the weight vector of the attributes, we establish an optimization model based on the basic idea of the traditional gray relational analysis (GRA) method, by which the attribute weights can be determined. For the special situation where the information about attribute weights is completely unknown, we establish another optimization model; solving it yields a simple, exact formula for the attribute weights. Then, building on the traditional GRA method, calculation steps are given for the interval-valued intuitionistic fuzzy setting, and a modified GRA method is developed for interval-valued intuitionistic fuzzy multiple attribute decision making with incompletely known attribute weight information. The paper thus provides a new method for sensitivity analysis of MADM problems: by changing the attribute weights, one can determine the resulting changes in the final decision. Finally, an illustrative example verifies the developed approach and demonstrates its practicality and effectiveness.
This document presents a two-stage method for solving fuzzy transportation problems where the costs, supplies, and demands are represented by symmetric trapezoidal fuzzy numbers. In the first stage, the problem is solved to satisfy minimum demand requirements. Remaining supplies are then distributed in the second stage to further minimize costs. A numerical example demonstrates using robust ranking techniques to convert the fuzzy problem into a crisp one, which is then solved using a zero suffix method. The total optimal costs from both stages provide the solution to the original fuzzy transportation problem.
I am Stacy L. I am a Matlab Assignment Expert at matlabassignmentexperts.com. I hold a Master's in Matlab, University of Houston. I have been helping students with their homework for the past 9 years. I solve assignments related to Data Analysis.
Visit matlabassignmentexperts.com or email info@matlabassignmentexperts.com.
You can also call on +1 678 648 4277 for any assistance with Data Analysis Assignments.
This document describes a quadratic assignment problem (QAP) involving assigning 358 constraints and 50 variables. It provides an example of a QAP with 3 facilities and 3 locations. The QAP aims to assign facilities to locations in a way that minimizes total cost, which is a function of the flow between facilities and the distance between locations. Several applications of QAP are discussed, including facility location, scheduling, and ergonomic design problems.
1. Hash tables are good for random access of elements but not sequential access. When records need to be accessed sequentially, hashing can be problematic because elements are stored in random locations instead of consecutively.
2. To find the successor of a node in a binary search tree, we take the right child. This operation has a runtime complexity of O(1).
3. When comparing operations like insertion, deletion, and searching between different data structures, arrays generally have the best performance for insertion and searching, while linked lists have better performance for deletion and allow for easy insertion/deletion anywhere. Binary search trees fall between these two.
The document discusses greedy algorithms and their properties. It describes how greedy algorithms work by making locally optimal choices at each step in the hope of reaching a globally optimal solution. Two examples are given: the activity selection problem and finding minimum spanning trees. Prim's algorithm for finding minimum spanning trees is described in detail, showing how it works by always selecting the lightest edge between the growing tree and remaining vertices.
The document discusses backtracking and branch and bound algorithms. Backtracking incrementally builds candidates and abandons them (backtracks) when they cannot lead to a valid solution. Branch and bound systematically enumerates solutions and discards branches that cannot produce a better solution than the best found so far based on upper bounds. Examples provided are the N-Queens problem solved with backtracking and the knapsack problem solved with branch and bound. Pseudocode is given for both algorithms.
The document provides a course calendar for a class on Bayesian estimation methods. It lists the dates and topics to be covered over 15 class periods from September to January. The topics progress from basic concepts like Bayes estimation and the Kalman filter, to more modern methods like particle filters, hidden Markov models, Bayesian decision theory, and applications of principal component analysis and independent component analysis. One class is noted as having no class.
Application of Graphic LASSO in Portfolio Optimization_Yixuan Chen & Mengxi J...Mengxi Jiang
- The document describes using graphical lasso to estimate the precision matrix of stock returns and apply portfolio optimization.
- Graphical lasso estimates the precision matrix instead of the covariance matrix to allow for sparsity. This makes the estimation more efficient for large datasets.
- The study uses 8 different models to simulate stock return data and compares the performance of graphical lasso, sample covariance, and shrinkage estimators on portfolio optimization of in-sample and out-of-sample test data. Graphical lasso performed best on out-of-sample test data, showing it can generate portfolios that generalize well.
I am Irene M. I am a Diffusion Assignment Expert at statisticsassignmenthelp.com. I hold a Masters in Statistics from California, USA.
I have been helping students with their homework for the past 8 years. I solve assignments related to Diffusion. Visit statisticsassignmenthelp.com or email info@statisticsassignmenthelp.com.
You can also call on +1 678 648 4277 for any assistance with Diffusion Assignments.
Comparitive Analysis of Algorithm strategiesTalha Shaikh
The document discusses various algorithm strategies including decrease and conquer, greedy approach, backtracking, and transform and conquer. It provides definitions, examples, advantages and disadvantages for each strategy. Decrease and conquer algorithms like insertion sort reduce the problem size at each step. Greedy algorithms make locally optimal choices at each step. Backtracking algorithms systematically explore all solutions using a depth first search approach.
Independent Component Analysis (ICA) is a statistical technique used to separate mixed signals into their independent source components. ICA assumes that the observed mixed data was generated by mixing together statistically independent source signals. ICA uses an "unmixing" matrix to separate the mixed signals by maximizing the statistical independence of the estimated components, with the goal of recovering the original independent source signals. ICA models the probability distribution of each independent source signal using the sigmoid function, and then iteratively updates the unmixing matrix weights to maximize the overall likelihood of the data, until convergence is reached. However, ICA has limitations in that the original source signal order and scaling cannot be determined if the source signals are Gaussian distributed.
This assignment discusses two algorithm design techniques - dynamic programming and decrease-and-conquer. It provides questions to design algorithms using these techniques for problems like rod cutting, shortest paths on a chessboard, insertion sort, and checking graph connectivity. Students must submit the assignment by the given deadline or face point deductions, and no presentations will be held after a certain date.
Dynamic Programming design technique is one of the fundamental algorithm design techniques, and possibly one of the ones that are hardest to master for those who did not study it formally. In these slides (which are continuation of part 1 slides), we cover two problems: maximum value contiguous subarray, and maximum increasing subsequence.
(DL hacks輪読) How to Train Deep Variational Autoencoders and Probabilistic Lad...Masahiro Suzuki
This document discusses techniques for training deep variational autoencoders and probabilistic ladder networks. It proposes three advances: 1) Using an inference model similar to ladder networks with multiple stochastic layers, 2) Adding a warm-up period to keep units active early in training, and 3) Using batch normalization. These advances allow training models with up to five stochastic layers and achieve state-of-the-art log-likelihood results on benchmark datasets. The document explains variational autoencoders, probabilistic ladder networks, and how the proposed techniques parameterize the generative and inference models.
The document discusses greedy algorithms and how they can be applied to solve optimization problems like the knapsack problem, activity selection problem, and job sequencing problem. It provides examples of greedy algorithms for each problem. A greedy algorithm works by making locally optimal choices at each step in the hope of finding a globally optimal solution. For the knapsack problem, the greedy approach sorts items by profit/weight ratio and fills the knapsack accordingly. For activity selection, it schedules activities based on earliest finish time. For job sequencing, it sorts jobs by profit and schedules the most profitable jobs that meet deadlines.
A Correlative Information-Theoretic Measure for Image SimilarityFarah M. Altufaili
A hybrid measure is proposed for assessing the similarity among gray-scale images. The well-known Structural Similarity Index Measure (SSIM) was designed using a statistical approach that fails under significant noise (low PSNR). The proposed measure, denoted SjhCorr2, combines two parts: the first part is information-theoretic, while the second is based on 2D correlation. The concept of a symmetric joint histogram is used in the information-theoretic part. The new measure combines the advantages of statistical and information-theoretic approaches, is robust under noise, and outperforms the classical SSIM in detecting image similarity at low PSNR.
Penalty Function Method For Solving Fuzzy Nonlinear Programming Problem - paperpublications3
Abstract: In this work, the fuzzy nonlinear programming problem (FNLPP) is developed and its results are discussed. The numerical solutions of the crisp problems are compared, and the fuzzy solution and its effectiveness are also presented and discussed. A penalty function method is developed and combined with the Nelder-Mead direct-search optimization algorithm to solve the FNLPP.
Keywords: fuzzy set theory, fuzzy numbers, decision making, nonlinear programming, Nelder-Mead algorithm, penalty function method.
This document discusses using principal component analysis (PCA) and the discrete cosine transform (DCT) to recognize images from a database. It explains how bitmaps store image data, DCT compacts image energy, and PCA reduces dimensionality by finding the principal components via eigenvectors of the covariance matrix. An algorithm is proposed that uses DCT, PCA on a 3x3 block, characteristic equations to find the maximum eigenvector, and least mean square comparison to recognize queries against the database images.
The document discusses greedy algorithms and their applications. It provides examples of problems that greedy algorithms can solve optimally, such as the change making problem and finding minimum spanning trees (MSTs). It also discusses problems where greedy algorithms provide approximations rather than optimal solutions, such as the traveling salesman problem. The document describes Prim's and Kruskal's algorithms for finding MSTs and Dijkstra's algorithm for solving single-source shortest path problems. It explains how these algorithms make locally optimal choices at each step in a greedy manner to build up global solutions.
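As an illustration of the greedy pattern the document describes, here is a minimal sketch of Dijkstra's single-source shortest-path algorithm, which at each step greedily settles the closest unsettled node; the adjacency-list representation is our own choice:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for non-negative edge weights.
    `graph` maps node -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry; already settled
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w           # relax the edge (u, v)
                heapq.heappush(heap, (d + w, v))
    return dist
```

The locally optimal choice (pop the node with the smallest tentative distance) is globally optimal here precisely because edge weights are non-negative.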
Principal components analysis (PCA) is an algorithm that identifies the subspace where data approximately lies in a way that requires only an eigenvector calculation. PCA works by finding the directions, called principal components, that maximize the variance of the projected data points. It does this by computing the eigenvectors of the covariance matrix of the data, with the top eigenvectors providing the principal components that best explain the variability in the data using a reduced dimensional space. PCA has applications in data compression, dimensionality reduction, noise reduction, and data visualization.
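The eigenvector computation described above can be sketched without a linear-algebra library by running power iteration on the covariance matrix. The following toy example for 2-D points (our own simplification, not the document's code) finds the direction of maximum variance:

```python
def first_principal_component(data, iters=200):
    """Top principal component of 2-D points via power iteration
    on the 2x2 covariance matrix."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    centered = [(x - mx, y - my) for x, y in data]
    # Entries of the 2x2 covariance matrix.
    cxx = sum(x * x for x, _ in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    # Power iteration converges to the eigenvector with the largest
    # eigenvalue, i.e. the direction that maximizes projected variance.
    vx, vy = 1.0, 0.0
    for _ in range(iters):
        wx = cxx * vx + cxy * vy
        wy = cxy * vx + cyy * vy
        norm = (wx * wx + wy * wy) ** 0.5
        vx, vy = wx / norm, wy / norm
    return vx, vy
```

For points scattered along the line y = x, the returned unit vector is (approximately) along (1, 1), as expected.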
This document proposes using machine learning techniques to predict COVID-19 infections based on chest x-ray images. Specifically, it involves using discrete wavelet transform to extract space-frequency features from chest x-rays, reducing the dimensionality of features using Shannon entropy, and then training standard machine learning classifiers like logistic regression, support vector machine, decision tree, and convolutional neural network on the extracted features to classify images as COVID-19 positive or negative. The document provides background on the proposed techniques of discrete wavelet transform, entropy, and various machine learning models.
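As a toy illustration of the two feature-extraction steps named above, a single-level 1-D Haar transform paired with a histogram-based Shannon entropy can be sketched in a few lines. This is an assumed simplification: a real chest-x-ray pipeline would apply a 2-D transform from a wavelet library to the image.

```python
import math

def haar_dwt_1d(signal):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    (approximation) and differences (detail), scaled by 1/sqrt(2)
    to preserve energy. Assumes an even-length signal."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def shannon_entropy(values, bins=8):
    """Histogram-based Shannon entropy (in bits) of a feature vector,
    usable as a cheap one-number summary for dimensionality reduction."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0       # guard against a constant vector
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts if c)
```

A constant signal has zero entropy, while values spread evenly across all bins approach the maximum of log2(bins) bits.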
Sensitivity Analysis of GRA Method for Interval Valued Intuitionistic Fuzzy M... - ijsrd.com
The aim of this paper is to investigate multiple-attribute decision-making (MADM) problems with intuitionistic fuzzy information, in which the information about attribute weights is incompletely known and the attribute values take the form of intuitionistic fuzzy numbers. In order to obtain the weight vector of the attributes, we establish an optimization model based on the basic idea of the traditional gray relational analysis (GRA) method, by which the attribute weights can be determined. For the special situation where the information about attribute weights is completely unknown, we establish another optimization model; by solving it, we obtain a simple and exact formula for determining the attribute weights. Then, based on the traditional GRA method, calculation steps are given for the interval-valued intuitionistic fuzzy setting, and a modified GRA method is developed for interval-valued intuitionistic fuzzy multiple-attribute decision-making with incompletely known attribute-weight information. This paper provides a new method for sensitivity analysis of MADM problems, so that by using it and changing the weights of attributes, one can determine the changes in the final results of a decision-making problem. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness.
PCA is an algorithm that identifies the subspace where data approximately lies in a way that directly computes the principal eigenvectors of the data's covariance matrix. This identifies directions that maximize the variance of data projections. PCA can be used for dimensionality reduction by projecting high-dimensional data onto the top k principal components/eigenvectors to obtain a lower-dimensional representation. It has applications in data compression, visualization, preprocessing data for machine learning, and filtering noise.
Propagation of Error Bounds due to Active Subspace Reduction - Mohammad
This document summarizes the propagation of error bounds due to active subspace reduction in computational models. It presents two algorithms for performing active subspace reduction: one that is gradient-free and reduces the response or state space, and one that is gradient-based and reduces the parameter space. It then develops a theorem for propagating error bounds across multiple reductions, both in the parameter and response spaces. Numerical experiments on an analytic function and a nuclear reactor pin cell model are used to validate the error bound approach.
The document reports on the results of three image processing projects. The first project implemented Lloyd-Max quantization to reduce image file sizes and Retinex theory to compensate for uneven illumination. The second project used principal component analysis to compute eigenfaces for face recognition. The third project performed linear discriminant analysis and tensor-based linear discriminant analysis for binary classification and visual object recognition. Illumination compensation subtracted an estimated illumination plane from image intensities to reduce shadows. Eigenfaces were the principal components of a training set of face images. Tensor-based linear discriminant analysis treated images as higher-order tensors to outperform conventional LDA.
Min-based qualitative possibilistic networks are one of the effective tools for the compact representation of decision problems under uncertainty. Exact approaches for computing decisions based on possibilistic networks are limited by the size of the possibility distributions, and are generally based on possibilistic propagation algorithms. An important step in the computation of the decision is the transformation of the DAG (Directed Acyclic Graph) into a secondary structure known as the junction tree (JT); this transformation is known to be costly and represents a difficult problem. We propose in this paper a new approximate approach for the computation of decisions under uncertainty within possibilistic networks. Computing the optimal optimistic decision no longer goes through the junction-tree construction step; instead, it is performed by calculating the degree of normalization in the moral graph resulting from merging the possibilistic network codifying the knowledge of the agent with the one codifying its preferences.
This document discusses univariate and multivariate regressions. It explains how to measure the accuracy of regression coefficient estimates using standard error, confidence intervals, and hypothesis testing. It then covers multiple linear regression, using methods like forward, backward, and mixed selection to identify important predictor variables. The document concludes by discussing model fitting metrics like RSE and R2, and factors to consider when making predictions with regression models.
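The accuracy measures mentioned above for a univariate fit can be computed directly from the least-squares residuals. A minimal sketch (our own function, not from the document) returning the intercept, slope, standard error of the slope, and R^2:

```python
import math

def simple_ols(xs, ys):
    """Ordinary least squares for y = b0 + b1*x.
    Returns (b0, b1, standard error of b1, R^2)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b1 = sxy / sxx                          # slope estimate
    b0 = ybar - b1 * xbar                   # intercept estimate
    residuals = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
    rss = sum(r * r for r in residuals)     # residual sum of squares
    tss = sum((y - ybar) ** 2 for y in ys)  # total sum of squares
    rse = math.sqrt(rss / (n - 2))          # residual standard error
    se_b1 = rse / math.sqrt(sxx)            # standard error of the slope
    r2 = 1 - rss / tss
    return b0, b1, se_b1, r2
```

A rough 95% confidence interval for the slope is then b1 plus or minus about two standard errors, which is the basis for the hypothesis tests the document describes.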
A New Method To Solve Interval Transportation Problems - Sara Alvarez
This document summarizes a research paper that proposes a new method for solving interval transportation problems (ITPs), where the costs, supplies, and demands are represented by intervals rather than precise values. The method transforms the ITP into an equivalent bi-objective problem that minimizes the left limit and width of the interval cost function. Fuzzy programming is then used to obtain the solution to the bi-objective problem. Twenty test problems are solved using the proposed method and compared to an existing method, showing the new approach provides better solutions for 11 of the problems.
Numerical Solutions of Burgers' Equation Project Report - Shikhar Agarwal
This report summarizes the use of finite element methods to numerically solve Burgers' equation. It introduces finite element methods and the Galerkin method for approximating solutions. MATLAB codes are presented to solve example boundary value problems and differential equations. The method of quasi-linearization is also described for solving Burgers' equation numerically. The report concludes that finite element methods can accurately predict numerical solutions that are close to exact solutions for problems where no closed-form solution exists.
The document discusses various techniques for clustering and dimensionality reduction of web documents. It introduces machine learning clustering methods like k-means clustering and discusses challenges like handling different cluster sizes and shapes. It also covers dimensionality reduction methods like principal component analysis (PCA) and locality-sensitive hashing that can be used to cluster high dimensional web document datasets by reducing their dimensionality.
The document presents a new method called the zero average method (ZAM) for solving generalized fuzzy transportation problems where transportation costs, demand, and supply are represented by generalized trapezoidal fuzzy numbers. The ZAM assumes uncertainty in the decision-maker's knowledge of precise values. Preliminaries on fuzzy numbers and arithmetic operations are provided. A numerical example is then solved to illustrate the application of the ZAM. The proposed ZAM is described as easy to understand and apply to real-life transportation problems.
In this paper, person identification is done based on sets of facial images. Each facial image is considered as a scattered point for logistic regression; the vertical distance between the scattered point of a facial image and the regression line is taken as the parameter to determine whether the image is of the same person or not. The ratio of the Euclidean distances (in terms of the number of pixels of the gray-scale image, based on the 'imtool' of Matlab 13.0) between nasal and eye points is determined, and the variance of this ratio is considered another parameter to identify a facial image. The concept is combined with the ghost image of Principal Component Analysis, where the mean square error and signal-to-noise ratio (SNR) in dB are considered as the parameters of detection. The combination of the three methods enhances the degree of accuracy compared to any individual one.
This document provides a course calendar for a machine learning course with the following contents:
- The course covers topics like Bayesian estimation, Kalman filters, particle filters, hidden Markov models, Bayesian decision theory, principal component analysis, independent component analysis, and clustering algorithms over 13 classes between September and January.
- One lecture plan discusses nonparametric density estimation approaches like histogram density estimation, kernel density estimation, and k-nearest neighbor density estimation. It also covers cross-validation techniques.
- Another document section provides an example of applying kernel density estimation and k-nearest neighbor classification to automatically sort fish based on lightness, including discussing training and test phase classification. It compares different bandwidths and values of k.
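The density-estimation and k-nearest-neighbour ideas in the lecture plan can be sketched for 1-D features such as fish lightness; the data representation below is our own assumption, not the course's code:

```python
import math

def gaussian_kde(samples, x, bandwidth):
    """Kernel density estimate at point x: the average of Gaussian
    bumps of width `bandwidth` centered on the training samples."""
    n = len(samples)
    norm = 1 / (bandwidth * math.sqrt(2 * math.pi))
    return sum(norm * math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
               for s in samples) / n

def knn_classify(train, x, k):
    """k-nearest-neighbour majority vote on 1-D features.
    `train` is a list of (feature, label) pairs."""
    nearest = sorted(train, key=lambda fl: abs(fl[0] - x))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)
```

The bandwidth in the KDE and the k in the classifier play the same smoothing role the lecture compares: too small overfits individual samples, too large washes out the class structure.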
This document proposes a new approximate approach for computing the optimal optimistic decision within possibilistic networks. The approach avoids transforming the initial graph into a junction tree, which is computationally expensive. Instead, it performs the computation by calculating the degree of normalization in the moral graph resulting from merging the possibilistic network representing the agent's beliefs and the one representing its preferences. This allows the approach to have polynomial complexity compared to the exact approach based on junction trees, which is NP-hard.
New Method for Finding an Optimal Solution of Generalized Fuzzy Transportatio... - BRNSS Publication Hub
In this paper, a proposed method, namely, zero average method is used for solving fuzzy transportation problems by assuming that a decision-maker is uncertain about the precise values of the transportation costs, demand, and supply of the product. In the proposed method, transportation costs, demand, and supply are represented by generalized trapezoidal fuzzy numbers. To illustrate the proposed method, a numerical example is solved. The proposed method is easy to understand and apply to real-life transportation problems for the decision-makers.
Transportation Problem with Pentagonal Intuitionistic Fuzzy Numbers Solved Us... - IJERA Editor
This paper presents a solution methodology for the transportation problem in an intuitionistic fuzzy environment in which costs are represented by pentagonal intuitionistic fuzzy numbers. The transportation problem is a particular class of linear programming associated with day-to-day activities in our real life. It helps in solving problems on the distribution and transportation of resources from one place to another. The objective is to satisfy the demand at the destinations from the supply constraints at the minimum possible transportation cost. The problem is solved using a ranking technique called the accuracy function for pentagonal intuitionistic fuzzy numbers, together with Russell's method.
In this paper, the L1 norm of continuous functions and the corresponding continuous estimation of regression parameters are defined. The continuous L1-norm estimation problem for one- and two-parameter linear models is solved. We proceed to use the functional form and parameters of the probability distribution function of income to exactly determine the L1-norm approximation of the corresponding Lorenz curve of the statistical population under consideration.
Similar to International Journal of Engineering Research and Development (IJERD)
A Novel Method for Prevention of Bandwidth Distributed Denial of Service Attacks - IJERD Editor
Distributed Denial of Service (DDoS) attacks have become a massive threat to the Internet, whose traditional architecture is vulnerable to attacks like DDoS. The attacker first acquires an army of zombies; that army is then instructed by the attacker when to start an attack and on whom it should be carried out. In this paper, the different techniques used to perform DDoS attacks, the tools used to perform them, and countermeasures for detecting the attackers and eliminating Bandwidth Distributed Denial of Service (B-DDoS) attacks are reviewed. DDoS attacks are carried out using various flooding techniques.
The main purpose of this paper is to design an architecture which can reduce Bandwidth Distributed Denial of Service attacks and make the victim site or server available to normal users by eliminating the zombie machines. Our primary focus is to discuss how normal machines turn into zombies (bots), how an attack is initiated, the DDoS attack procedure, and how an organization can save its server from becoming a DDoS victim. To demonstrate this, we implemented a simulated environment with Cisco switches, routers, a firewall, some virtual machines, and some attack tools to display a real DDoS attack. By using time scheduling, resource limiting, system logs, access control lists, and a modular policy framework, we stopped the attack and identified the attacker (bot) machines.
Hearing loss is one of the most common human impairments. It is estimated that by the year 2015 more than 700 million people will suffer mild deafness. Most can be helped by hearing aid devices, depending on the severity of their hearing loss. This paper describes the implementation and characterization details of a dual-channel transmitter front end (TFE) for digital hearing aid (DHA) applications that uses novel micro-electromechanical-systems (MEMS) audio transducers and ultra-low power-scalable analog-to-digital converters (ADCs), which enable a very low form factor, energy-efficient implementation for next-generation DHAs. The contribution of the design is the implementation of the dual-channel MEMS microphones and the power-scalable ADC system.
Influence of tensile behaviour of slab on the structural behaviour of shear c... - IJERD Editor
A composite beam is composed of a steel beam and a slab connected by means of shear connectors, such as studs installed on the top flange of the steel beam, to form a structure behaving monolithically. This study analyzes the effects of the tensile behavior of the slab on the structural behavior of the shear connection, such as slip stiffness and maximum shear force, in composite beams subjected to hogging moment. The results show that the shear studs located in the crack-concentration zones caused by large hogging moments sustain significantly smaller shear force and slip stiffness than those in other zones. Moreover, the reduction of the slip stiffness in the shear connection also appears to be closely related to the change in the tensile strain of the rebar as the load increases. Further experimental and analytical studies shall be conducted considering variables such as the reinforcement ratio and the arrangement of shear connectors to achieve efficient design of the shear connection in composite beams subjected to hogging moment.
Gold prospecting using Remote Sensing ‘A case study of Sudan’ - IJERD Editor
Gold has been extracted from northeast Africa for more than 5000 years, and this may be the first place where the metal was extracted. The Arabian-Nubian Shield (ANS) is an exposure of Precambrian crystalline rocks on the flanks of the Red Sea; the crystalline rocks are mostly Neoproterozoic in age. The ANS spans the nations of Israel, Jordan, Egypt, Saudi Arabia, Sudan, Eritrea, Ethiopia, Yemen, and Somalia. The Arabian-Nubian Shield consists of juvenile continental crust that formed between 900 and 550 Ma, when intra-oceanic arcs welded together along ophiolite-decorated arcs. Primary Au mineralization probably developed in association with the growth of the intra-oceanic arcs and the evolution of back-arcs. Multiple episodes of deformation have obscured the primary metallogenic setting, but at least some of the deposits preserve evidence that they originated as sea-floor massive sulphide deposits.
The Red Sea Hills region is a vast span of rugged, harsh and inhospitable terrain with an inimical moon-like landscape; nevertheless, since ancient times it has been famed as an abode of gold and was a major source of wealth for the Pharaohs of ancient Egypt. The Pharaohs' old workings have been periodically rediscovered through time. Recent endeavours by the Geological Research Authority of Sudan led to the discovery of a score of occurrences of gold and massive sulphide mineralization. In the 1990s the Geological Research Authority of Sudan (GRAS), in cooperation with BRGM, utilized Landsat TM satellite data and the spectral-ratio technique to map possible mineralized zones in the Red Sea Hills of Sudan; the outcome of the study mapped a gossan-type gold mineralization. The band-ratio technique was applied to the Arbaat area and the signature of an alteration zone was detected; such alteration zones are commonly associated with mineralization. A field check confirmed the existence of a stockwork of gold-bearing quartz in the alteration zone. Another type of gold mineralization discovered using remote sensing is the gold associated with metachert in the Atmur Desert.
Reducing Corrosion Rate by Welding Design - IJERD Editor
This document summarizes a study on reducing corrosion rates in steel through welding design. The researchers tested different welding groove designs (X, V, 1/2X, 1/2V) and preheating temperatures (400°C, 500°C, 600°C) on ferritic malleable iron samples. Testing found that X and V groove designs with 500°C and 600°C preheating had corrosion rates of 0.5-0.69% weight loss after 14 days, compared to 0.57-0.76% for 400°C preheating. Higher preheating reduced residual stresses which decreased corrosion. Residual stresses were 1.7 MPa for optimal X groove and 600°C
Router 1X3 – RTL Design and Verification - IJERD Editor
Routing is the process of moving a packet of data from source to destination; it enables messages to pass from one computer to another and eventually reach the target machine. A router is a networking device that forwards data packets between computer networks. It is connected to two or more data lines from different networks (as opposed to a network switch, which connects data lines from one single network). This paper mainly emphasizes the study of the router device, its top-level architecture, and how the various sub-modules of the router, i.e. the register, FIFO, FSM and synchronizer, are synthesized, simulated and finally connected to the top module.
Active Power Exchange in Distributed Power-Flow Controller (DPFC) At Third Ha... - IJERD Editor
This paper presents a component within the flexible AC-transmission system (FACTS) family, called the distributed power-flow controller (DPFC). The DPFC is derived from the unified power-flow controller (UPFC) with the common dc link eliminated. The DPFC has the same control capabilities as the UPFC, which comprise the adjustment of the line impedance, the transmission angle, and the bus voltage. The active power exchange between the shunt and series converters, which is through the common dc link in the UPFC, is now through the transmission lines at the third-harmonic frequency. The DPFC employs multiple small-size single-phase converters, which reduces the cost of equipment, requires no voltage isolation between phases, and increases redundancy and thereby reliability. The principle and analysis of the DPFC are presented in this paper, and the corresponding simulation results, carried out on a scaled prototype, are also shown.
Mitigation of Voltage Sag/Swell with Fuzzy Control Reduced Rating DVR - IJERD Editor
Power quality has become an increasingly pivotal issue from the point of view of industrial electricity consumers in recent times. Modern industries employ sensitive power-electronic equipment, control devices and non-linear loads as part of automated processes to increase energy efficiency and productivity. Voltage disturbances are the most common power quality problem, because the use of large numbers of sophisticated and sensitive electronic devices in industrial systems has increased. This paper discusses the design and simulation of a dynamic voltage restorer (DVR) for the improvement of power quality and the reduction of harmonic distortion at sensitive loads. Power quality problems manifest as non-standard voltage, current and frequency. In power systems, voltage sag, swell, flicker and harmonics are some of the problems affecting sensitive loads. The compensation capability of a DVR depends primarily on the maximum voltage-injection ability and the amount of stored energy available within the restorer. The device is connected in series with the distribution feeder at medium voltage. A fuzzy logic controller is used to produce the gate pulses for the control circuit of the DVR, and the circuit is simulated using MATLAB/SIMULINK software.
Study on the Fused Deposition Modelling In Additive Manufacturing - IJERD Editor
The additive manufacturing process, also popularly known as 3-D printing, is a process where a product is created in a succession of layers. It is based on a novel materials-incremental manufacturing philosophy. Unlike conventional manufacturing processes, where material is removed from a given workpiece to derive the final shape of a product, 3-D printing develops the product from scratch, obviating the necessity to cut away material and preventing wastage of raw materials. Commonly used raw materials for the process are ABS plastic, PLA and nylon; recently the use of gold, bronze and wood has also been implemented. The process is largely insensitive to geometric complexity, in that an object of any shape and size can be manufactured.
Spyware triggering system by particular string value - IJERD Editor
This computer programme can be used for good or bad purposes, in hacking or for general use; it can be seen as a next step for hacking techniques such as keyloggers and spyware. In this system, once a user or hacker stores a particular string as input, the software continually compares the user's typing activity with that stored string, and if a match is found it launches a spyware programme.
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a... - IJERD Editor
This paper presents a blind steganalysis technique to effectively attack the JPEG steganographic schemes Jsteg, F5, Outguess and DWT-based schemes. The proposed method exploits the correlations between block-DCT coefficients from intra-block and inter-block relations, and the statistical moments of the characteristic functions of the test image are selected as features. The features are extracted from the BDCT JPEG 2-array. A Support Vector Machine with cross-validation is implemented for the classification. The proposed scheme gives improved outcomes in attacking these schemes.
Secure Image Transmission for Cloud Storage System Using Hybrid Scheme - IJERD Editor
Data over the cloud is transferred or transmitted between servers and users. The privacy of that data is very important, as it comprises personal information: if data is hacked, it can be used to defame a person socially. Delays can also occur during data transmission, e.g. in mobile communication where bandwidth is low. Hence compression algorithms are proposed for fast and efficient transmission, encryption is used for security, and blurring provides an additional layer of security. These algorithms are hybridized to achieve robust and efficient security and transmission over a cloud storage system.
Application of Buckley-Leverett Equation in Modeling the Radius of Invasion i... - IJERD Editor
A thorough review of existing literature indicates that the Buckley-Leverett equation only analyzes waterflood practices directly, without any adjustments for real reservoir scenarios; by doing so, quite a number of errors are introduced into these analyses. Also, for most waterflood scenarios, a radial investigation is more appropriate than a simplified linear system. This study investigates the adoption of the Buckley-Leverett equation to estimate the radius of invasion of the displacing fluid during waterflooding. The model is also adopted for a microbial flood, and a comparative analysis is conducted for both waterflooding and microbial flooding. The analysis not only records a success in determining the radial distance of the leading edge of water during the flooding process, but also gives a clearer understanding of the applicability of microbes to enhance oil production through in-situ production of bio-products like biosurfactants, biogenic gases and bio-acids.
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera - IJERD Editor
Gesture gaming is a method by which users with a laptop/PC/X-Box play games using natural or bodily gestures. This paper presents a way of playing free flash games on the Internet using an ordinary webcam with the help of open-source technologies. Emphasis in human-activity recognition is placed on pose estimation and the consistency of the player's pose, estimated with the help of an ordinary web camera at resolutions from VGA upwards. Our work involved showing a 10-second documentary to the user on how to play a particular game using gestures and what kinds of gestures can be performed in front of the system. The initial RGB values for the gesture component are obtained by instructing the user to place the component in a red box for about 10 seconds after the short documentary finishes. The system then opens the concerned game on popular flash-game sites like Miniclip, Games Arcade, GameStop etc., loads the game by clicking at various places, and brings the state to a point where the user need only perform gestures to start playing. At any point the user can call off the game by hitting the Esc key, upon which the program releases all controls and returns to the desktop. It was noted that the results obtained using an ordinary webcam matched those of the Kinect, and users could relive the gaming experience of free flash games on the net. Effective in-game advertising could therefore also be achieved, resulting in disruptive growth for advertising firms.
Hardware Analysis of Resonant Frequency Converter Using Isolated Circuits And... - IJERD Editor
The LLC resonant frequency converter is basically a combination of a series and a parallel resonant circuit. The LCC resonant converter has the disadvantage that, although it has two resonant frequencies, the lower one lies in the ZCS region [5]; for this application, we cannot design the converter to operate at that resonant frequency. The LLC resonant converter has existed for a very long time, but because its characteristics were not well understood it was used as a series resonant converter with a basically passive (resistive) load. Here, it is designed to operate at a switching frequency higher than the resonant frequency of the series resonant tank of Lr and Cr, where the converter acts very similarly to a series resonant converter. The benefit of the LLC resonant converter is its narrow switching-frequency range at light load [6]. The control circuit plays a very important role: the 555 timer used here provides a clean square wave, since the control circuit introduces no slew rate, which makes the square wave robust. The dead-band circuit provides an exclusive dead band of a few microseconds so as to avoid the simultaneous firing of the two pairs of IGBTs when one pair switches off and the other switches on within the slightest period of time. The isolator circuit is associated with every circuit used, because it acts as a driver, and isolation for each IGBT is provided by a dedicated transformer supply [3]. The IGBTs are fired with the appropriate signals from the preceding boards, and finally a high-frequency rectifier circuit with a filtering capacitor is used to obtain a clean dc waveform. The basic goal of this analysis is to observe the waveforms and characteristics of converters with differently positioned passive elements in the form of tank circuits.
Simulated Analysis of Resonant Frequency Converter Using Different Tank Circu... - IJERD Editor
The LLC resonant frequency converter is basically a combination of a series and a parallel resonant circuit. The LCC resonant converter has the disadvantage that, although it has two resonant frequencies, the lower one lies in the ZCS region [5]; for this application, we cannot design the converter to operate at that resonant frequency. The LLC resonant converter has existed for a very long time, but because its characteristics were not well understood it was used as a series resonant converter with a basically passive (resistive) load. Here, it is designed to operate at a switching frequency higher than the resonant frequency of the series resonant tank of Lr and Cr, where the converter acts very similarly to a series resonant converter. The benefit of the LLC resonant converter is its narrow switching-frequency range at light load [6]. The control circuit plays a very important role: the 555 timer used here provides a clean square wave, since the control circuit introduces no slew rate, which makes the square wave robust. The dead-band circuit provides an exclusive dead band of a few microseconds so as to avoid the simultaneous firing of the two pairs of IGBTs when one pair switches off and the other switches on within the slightest period of time. The isolator circuit is associated with every circuit used, because it acts as a driver, and isolation for each IGBT is provided by a dedicated transformer supply [3]. The IGBTs are fired with the appropriate signals from the preceding boards, and finally a high-frequency rectifier circuit with a filtering capacitor is used to obtain a clean dc waveform. The basic goal of this analysis is to observe the waveforms and characteristics of converters with differently positioned passive elements in the form of tank circuits. The supporting simulation is done using the PSIM 6.0 software tool.
Amateurs Radio operator, also known as HAM communicates with other HAMs through Radio
waves. Wireless communication in which Moon is used as natural satellite is called Moon-bounce or EME
(Earth -Moon-Earth) technique. Long distance communication (DXing) using Very High Frequency (VHF)
operated amateur HAM radio was difficult. Even with the modest setup having good transceiver, power
amplifier and high gain antenna with high directivity, VHF DXing is possible. Generally 2X11 YAGI antenna
along with rotor to set horizontal and vertical angle is used. Moon tracking software gives exact location,
visibility of Moon at both the stations and other vital data to acquire real time position of moon.
“MS-Extractor: An Innovative Approach to Extract Microsatellites on „Y‟ Chrom...IJERD Editor
Simple Sequence Repeats (SSR), also known as Microsatellites, have been extensively used as
molecular markers due to their abundance and high degree of polymorphism. The nucleotide sequences of
polymorphic forms of the same gene should be 99.9% identical. So, Microsatellites extraction from the Gene is
crucial. However, Microsatellites repeat count is compared, if they differ largely, he has some disorder. The Y
chromosome likely contains 50 to 60 genes that provide instructions for making proteins. Because only males
have the Y chromosome, the genes on this chromosome tend to be involved in male sex determination and
development. Several Microsatellite Extractors exist and they fail to extract microsatellites on large data sets of
giga bytes and tera bytes in size. The proposed tool “MS-Extractor: An Innovative Approach to extract
Microsatellites on „Y‟ Chromosome” can extract both Perfect as well as Imperfect Microsatellites from large
data sets of human genome „Y‟. The proposed system uses string matching with sliding window approach to
locate Microsatellites and extracts them.
Importance of Measurements in Smart GridIJERD Editor
- The need to get reliable supply, independence from fossil fuels, and capability to provide clean
energy at a fixed and lower cost, the existing power grid structure is transforming into Smart Grid. The
development of a smart energy distribution grid is a current goal of many nations. A Smart Grid should have
new capabilities such as self-healing, high reliability, energy management, and real-time pricing. This new era
of smart future grid will lead to major changes in existing technologies at generation, transmission and
distribution levels. The incorporation of renewable energy resources and distribution generators in the existing
grid will increase the complexity, optimization problems and instability of the system. This will lead to a
paradigm shift in the instrumentation and control requirements for Smart Grids for high quality, stable and
reliable electricity supply of power. The monitoring of the grid system state and stability relies on the
availability of reliable measurement of data. In this paper the measurement areas that highlight new
measurement challenges, development of the Smart Meters and the critical parameters of electric energy to be
monitored for improving the reliability of power systems has been discussed.
Study of Macro level Properties of SCC using GGBS and Lime stone powderIJERD Editor
The document summarizes a study on the use of ground granulated blast furnace slag (GGBS) and limestone powder to replace cement in self-compacting concrete (SCC). Tests were conducted on SCC mixes with 0-50% replacement of cement with GGBS and 0-20% replacement with limestone powder. The results showed that replacing 30% of cement with GGBS and 15% with limestone powder produced SCC with the highest compressive strength of 46MPa, meeting fresh property requirements. The study concluded that this ternary blend of cement, GGBS and limestone powder can improve SCC properties while reducing costs.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development
e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com
Volume 7, Issue 9 (July 2013), PP. 22-28
Transportation Programming Under Uncertain
Environment
Subhakanta Dash1
, S. P. Mohanty 2
1
Department. of Mathematics, Silicon Institute of Technology, Bhubaneswar, India
2
Department of Mathematics, College of Engineering and Technology, Bhubaneswar, India.
Abstract:- In a transportation problem the cost of transporting from a source to a destination depends on many
factors which may not be deterministic in nature. In this work, the unit cost of transportation is considered to be
uncertain and a rough cost is assigned to the model. Solving such a model yields two solutions, a pessimistic one
and an optimistic one, which together suggest a range for the actual solution. Further, using an uncertain
distribution, a compromise solution procedure is also proposed.
Keywords:- Transportation Problem, Rough set, Rough variable, Uncertainty measure, Uncertain distribution
I. INTRODUCTION
The transportation problem is one of the important linear programming problems. Many efficient methods
have been developed to solve it when the unit cost of transportation, the demand and the supply
are all deterministic. In many practical situations, however, the unit cost of transportation may not be deterministic in nature.
The cost depends upon many factors, such as road condition, traffic load causing delay in delivery, breakdown of
vehicles, and labor problems for loading and unloading, any of which may increase the cost. All these factors are uncertain
and cannot be predicted beforehand due to insufficient information and an unpredictable environment.
Transportation problems in various types of uncertain environments, such as fuzzy and random, have been studied by
many researchers. In this paper we develop a transportation model in a rough and uncertain
environment.
Rough set theory was first developed by Pawlak [12] in 1982. Since then many researchers have
developed its theoretical aspects and applied the concept to solve problems related to data mining, data
envelopment analysis, neural networks, signal processing, etc.
B. Liu [1] has proposed uncertainty theory to deal with problems related to uncertain
environments. B. Liu [2] has also proposed the concept of a rough variable, which is a measurable function from
a rough space to the set of real numbers.
Many researchers have used the concept of rough variables in optimization problems. Xu and Yao
[8] studied two-person zero-sum matrix games with payoffs as rough variables. Youness [5] introduced a
rough programming problem considering the decision set as a rough set. Xu et al. [9] proposed a rough DEA
model to solve a supply chain performance evaluation problem with rough parameters. Xiao and Lai [11]
considered a power-aware VLIW instruction scheduling problem with power consumption parameters as rough
variables. Kundu et al. [10] proposed some solid transportation models with crisp and rough costs. In this paper we
formulate transportation problems considering the unit cost of transportation from source to destination as a
rough variable, and we obtain pessimistic and optimistic solutions of the problem under a given level of trust (level
of confidence). Again, using the uncertain distribution, we propose a solution method for a compromise
solution.
The rest of the paper is organized as follows: In Section II, we provide some definitions and properties
of rough sets, rough variables and uncertain distributions. In Section III, we present the proposed transportation
problems and model formulation. The solution procedure is discussed in Section IV. A numerical example is
given in Section V, and finally the conclusion and scope of further work are given in Section VI.
II. PRELIMINARIES
A. Rough set:
Rough set theory, developed by Z. Pawlak [12],[13], is a mathematical tool for dealing with uncertain
and incomplete data without any prior knowledge about the data. We deal only with the available information
provided by the data to generate conclusions.
Let U be a finite non-empty set called the universe and let R be a binary relation defined on U. Let R be an
equivalence relation and let R(x) be the equivalence class of the relation which contains x. R shall be referred to as
an indiscernibility relation.
2. Transportation Programming Under Uncertain Environment
23
For any X \subseteq U, the lower and upper approximations of X are defined by
\underline{R}(X) = \{x \in U : R(x) \subseteq X\}, \quad \overline{R}(X) = \{x \in U : R(x) \cap X \neq \emptyset\}.
The lower approximation \underline{R}(X) is the exact set contained in X, so that the objects in \underline{R}(X) are members
of X with certainty on the basis of the knowledge in R, whereas the objects in the upper approximation \overline{R}(X) can be
classified only as possible members of X. The difference between the upper and lower approximations of X is
called the R-boundary of X and is defined by RBN_R(X) = \overline{R}(X) - \underline{R}(X).
The set X is R-exact if RBN_R(X) = \emptyset; otherwise the set is R-rough.
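As a quick illustration of these definitions, the following sketch (not from the paper; the universe, partition and set X are invented for the example) computes the lower and upper approximations of a set from the equivalence classes of R:

```python
# Toy example: lower/upper approximations of X under an equivalence
# relation R, represented by its partition of the universe U into classes.

def approximations(partition, X):
    """Return (lower, upper) approximations of X w.r.t. the partition."""
    X = set(X)
    lower, upper = set(), set()
    for block in partition:              # block plays the role of R(x)
        block = set(block)
        if block <= X:                   # R(x) subset of X: certain members
            lower |= block
        if block & X:                    # R(x) meets X: possible members
            upper |= block
    return lower, upper

partition = [{1, 2}, {3}, {4, 5}]        # equivalence classes of R on U
X = {1, 2, 3, 4}
lower, upper = approximations(partition, X)
print(lower)          # lower approximation {1, 2, 3}
print(upper)          # upper approximation {1, 2, 3, 4, 5}
print(upper - lower)  # R-boundary {4, 5}; non-empty, so X is R-rough
```

The boundary is non-empty here because the class {4, 5} overlaps X without being contained in it, which is exactly what makes X an R-rough set.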
B. Rough variable
The concept of a rough variable was introduced by Liu [2] as an uncertain variable. The following definitions are based
on Liu [2].
Definition 1: Let \Lambda be a non-empty set, A a \sigma-algebra of subsets of \Lambda, \Delta an element of A, and
\pi a non-negative, real-valued, additive set function on A. Then (\Lambda, \Delta, A, \pi) is called a rough space.
Definition 2: A rough variable \xi on the rough space (\Lambda, \Delta, A, \pi) is a measurable function from \Lambda to the
set of real numbers such that for every Borel set B of \mathbb{R} we have \{\lambda \in \Lambda \mid \xi(\lambda) \in B\} \in A. The
lower and upper approximations of the rough variable are defined as
\underline{\xi} = \{\xi(\lambda) \mid \lambda \in \Delta\}, \quad \overline{\xi} = \{\xi(\lambda) \mid \lambda \in \Lambda\}.
Definition 3: Let (\Lambda, \Delta, A, \pi) be a rough space. Then the upper and lower trusts of an event A are defined by
\overline{Tr}\{A\} = \frac{\pi\{A\}}{\pi\{\Lambda\}} \quad and \quad \underline{Tr}\{A\} = \frac{\pi\{A \cap \Delta\}}{\pi\{\Delta\}}.
The trust of the event A is defined as
Tr\{A\} = \frac{1}{2}\left( \overline{Tr}\{A\} + \underline{Tr}\{A\} \right).
The trust measure satisfies the following:
Tr\{\Lambda\} = 1, \quad Tr\{\emptyset\} = 0;
Tr\{A\} \leq Tr\{B\} whenever A \subseteq B;
Tr\{A\} + Tr\{A^c\} = 1.
Definition 4: Let \xi_1, \xi_2 be rough variables defined on the rough space (\Lambda, \Delta, A, \pi). Then their sum and
product are defined as
(\xi_1 + \xi_2)(\lambda) = \xi_1(\lambda) + \xi_2(\lambda),
(\xi_1 \cdot \xi_2)(\lambda) = \xi_1(\lambda) \cdot \xi_2(\lambda).
Definition 5: Let \xi be a rough variable defined on the rough space (\Lambda, \Delta, A, \pi) and let \alpha \in (0,1]. Then
\xi_{\sup}(\alpha) = \sup\{ r \mid Tr\{\xi \geq r\} \geq \alpha \} is called the \alpha-optimistic value of \xi, and
\xi_{\inf}(\alpha) = \inf\{ r \mid Tr\{\xi \leq r\} \geq \alpha \} is called the \alpha-pessimistic value of \xi.
Definition 6: Let \xi be a rough variable defined on the rough space (\Lambda, \Delta, A, \pi). The expected value of \xi is
defined by
E[\xi] = \int_0^{\infty} Tr\{\xi \geq r\}\, dr - \int_{-\infty}^{0} Tr\{\xi \leq r\}\, dr.
Definition 7: The trust density function f : \mathbb{R} \to [0, \infty) of a rough variable \xi is a function such that
\Phi(x) = \int_{-\infty}^{x} f(y)\, dy
holds for all x \in (-\infty, \infty), where \Phi is the trust distribution of \xi. If \xi = ([a,b],[c,d]) is a rough
variable with c \leq a < b \leq d, then the trust distribution \Phi(x) = Tr\{\xi \leq x\} is

\Phi(x) =
0, if x \leq c;
\frac{x-c}{2(d-c)}, if c \leq x \leq a;
\frac{1}{2}\left( \frac{x-a}{b-a} + \frac{x-c}{d-c} \right), if a \leq x \leq b;
\frac{x+d-2c}{2(d-c)}, if b \leq x \leq d;
1, if x \geq d.

And the trust density function is defined as

f(x) =
\frac{1}{2(d-c)}, if c \leq x \leq a or b \leq x \leq d;
\frac{1}{2(b-a)} + \frac{1}{2(d-c)}, if a \leq x \leq b;
0, otherwise.

The \alpha-optimistic value of \xi = ([a,b],[c,d]) is

\xi_{\sup}(\alpha) =
(1-2\alpha)\, d + 2\alpha\, c, if \alpha \leq \frac{d-b}{2(d-c)};
2(1-\alpha)\, d + (2\alpha - 1)\, c, if \alpha \geq \frac{2d-a-c}{2(d-c)};
\frac{b(d-c) + d(b-a) - 2\alpha (b-a)(d-c)}{(b-a) + (d-c)}, otherwise.

The \alpha-pessimistic value of \xi = ([a,b],[c,d]) is

\xi_{\inf}(\alpha) =
(1-2\alpha)\, c + 2\alpha\, d, if \alpha \leq \frac{a-c}{2(d-c)};
(2\alpha - 1)\, d + 2(1-\alpha)\, c, if \alpha \geq \frac{b+d-2c}{2(d-c)};
\frac{a(d-c) + c(b-a) + 2\alpha (b-a)(d-c)}{(b-a) + (d-c)}, otherwise.

The expected value of \xi is E[\xi] = \frac{1}{4}(a+b+c+d).
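Definitions 5-7 can be turned directly into code. The sketch below is an illustrative implementation (not part of the paper); it evaluates the \alpha-optimistic value, \alpha-pessimistic value and expected value of a rough variable \xi = ([a,b],[c,d]), and reproduces the interval [7.2, 8.8] obtained in Section V for the rough cost ([7,9],[6,10]) at \alpha = 0.8:

```python
# alpha-optimistic / alpha-pessimistic values and expected value of a
# rough variable xi = ([a, b], [c, d]) with c <= a < b <= d.

def pessimistic(a, b, c, d, alpha):
    """alpha-pessimistic value: inf{ r | Tr{xi <= r} >= alpha }."""
    if alpha <= (a - c) / (2 * (d - c)):
        return (1 - 2 * alpha) * c + 2 * alpha * d
    if alpha >= (b + d - 2 * c) / (2 * (d - c)):
        return (2 * alpha - 1) * d + 2 * (1 - alpha) * c
    return (a * (d - c) + c * (b - a)
            + 2 * alpha * (b - a) * (d - c)) / ((b - a) + (d - c))

def optimistic(a, b, c, d, alpha):
    """alpha-optimistic value: sup{ r | Tr{xi >= r} >= alpha }."""
    if alpha <= (d - b) / (2 * (d - c)):
        return (1 - 2 * alpha) * d + 2 * alpha * c
    if alpha >= (2 * d - a - c) / (2 * (d - c)):
        return 2 * (1 - alpha) * d + (2 * alpha - 1) * c
    return (b * (d - c) + d * (b - a)
            - 2 * alpha * (b - a) * (d - c)) / ((b - a) + (d - c))

def expected(a, b, c, d):
    """Expected value E[xi] = (a + b + c + d) / 4."""
    return (a + b + c + d) / 4

# The rough cost ([7, 9], [6, 10]) from Table II at trust level 0.8:
print(optimistic(7, 9, 6, 10, 0.8))   # ~ 7.2, as in Section V-C
print(pessimistic(7, 9, 6, 10, 0.8))  # ~ 8.8, as in Section V-C
print(expected(7, 9, 6, 10))          # 8.0
```

Note the symmetry: the optimistic value of ([a,b],[c,d]) mirrors the pessimistic value with the roles of the interval endpoints reversed.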
C. Uncertainty Theory
B. Liu [1],[2] has developed uncertainty theory, which is considered a new approach to dealing with
indeterminacy when there is a lack of observed data. In this section, some basic concepts of uncertainty
theory are reviewed which shall be used to establish a compromise solution of the transportation problem
under uncertainty.
Uncertainty Measure
Let L be a \sigma-algebra on a non-empty set \Gamma. A set function M : L \to [0,1] is called an uncertain measure if it
satisfies the following axioms:
Axiom 1 (Normality): M\{\Gamma\} = 1 for the universal set \Gamma.
Axiom 2 (Duality): M\{\Lambda\} + M\{\Lambda^c\} = 1 for every event \Lambda.
Axiom 3 (Subadditivity): For every countable sequence of events \Lambda_1, \Lambda_2, \ldots we have
M\left\{ \bigcup_{i=1}^{\infty} \Lambda_i \right\} \leq \sum_{i=1}^{\infty} M\{\Lambda_i\}.
The triplet (\Gamma, L, M) is called an uncertainty space.
Axiom 4 (Product measure): Let (\Gamma_k, L_k, M_k) be uncertainty spaces for k = 1, 2, \ldots. The product uncertain
measure M is an uncertain measure satisfying
M\left\{ \prod_{k=1}^{\infty} \Lambda_k \right\} = \bigwedge_{k=1}^{\infty} M_k\{\Lambda_k\},
where the \Lambda_k are arbitrarily chosen events from L_k for k = 1, 2, \ldots, respectively.
4. Transportation Programming Under Uncertain Environment
25
Uncertain Variable
An uncertain variable \xi is essentially a measurable function from an uncertainty space to the set of real
numbers. The uncertainty distribution of \xi is defined as \Phi(x) = M\{\xi \leq x\} for any real number x. An
uncertain variable is called linear if it has the linear uncertainty distribution L(a,b):

\Phi(x) =
0, if x \leq a;
\frac{x-a}{b-a}, if a \leq x \leq b;
1, if x > b,

where a and b are real numbers with a < b.
An uncertainty distribution \Phi is said to be regular if its inverse function \Phi^{-1}(\alpha) exists and is unique for
each \alpha \in (0,1). The linear uncertainty distribution L(a,b) is regular, and its inverse uncertainty distribution is
\Phi^{-1}(\alpha) = (1-\alpha)\, a + \alpha\, b.
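For concreteness, the linear uncertainty distribution and its inverse can be sketched as follows (illustrative code; the interval [7.2, 8.8] and \alpha = 0.8 are borrowed from the numerical example of Section V):

```python
# Linear uncertainty distribution L(a, b) and its inverse. L(a, b) is
# regular, so the inverse exists and is unique on (0, 1).

def linear_cdf(x, a, b):
    """Phi(x) for the linear uncertainty distribution L(a, b)."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def linear_inverse(alpha, a, b):
    """Phi^{-1}(alpha) = (1 - alpha) * a + alpha * b."""
    return (1 - alpha) * a + alpha * b

print(linear_cdf(8.0, 7.2, 8.8))      # ~ 0.5: 8.0 is the midpoint of [7.2, 8.8]
print(linear_inverse(0.8, 7.2, 8.8))  # ~ 8.48, the crisp c11 used in Section V-C
```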
III. DESCRIPTION OF THE PROBLEM AND MODEL FORMULATION
A. Crisp Model
Suppose that there are m sources and n destinations. Let a_i be the number of supply units available at
source i (i = 1,2,\ldots,m) and let b_j be the number of demand units required at destination j (j = 1,2,\ldots,n). Let c_{ij}
represent the unit transportation cost for transporting units from source i to destination j. The objective is
to determine the number of units to be transported from source i to destination j so that the total transportation
cost is minimum. Let x_{ij} be the decision variable which denotes the number of units shipped from source i to
destination j.

Minimize Z = \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij}
subject to the constraints
\sum_{j=1}^{n} x_{ij} \leq a_i, \quad i = 1,2,\ldots,m, (1)
\sum_{i=1}^{m} x_{ij} \geq b_j, \quad j = 1,2,\ldots,n, (2)
and x_{ij} \geq 0. (3)
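The crisp model can be checked numerically. The sketch below (a verification aid, not part of the paper's method) evaluates the allocation reported in Section V against the cost matrix of Table I and confirms that it satisfies constraints (1)-(3) and has total cost Z = 243:

```python
# Evaluate the crisp model of Section V-A: costs from Table I,
# supplies a = (14, 12, 15), demands b = (21, 12, 8), at the
# allocation reported there. The problem is balanced (41 = 41),
# so a feasible allocation meets both (1) and (2) with equality.

cost = {(1, 1): 8, (1, 2): 13, (1, 3): 11,
        (2, 1): 6, (2, 2): 4,  (2, 3): 7,
        (3, 1): 5, (3, 2): 3,  (3, 3): 10}
x = {(1, 1): 14, (2, 2): 4, (2, 3): 8, (3, 1): 7, (3, 2): 8}

supply = {1: 14, 2: 12, 3: 15}
demand = {1: 21, 2: 12, 3: 8}

row = {i: sum(v for (p, q), v in x.items() if p == i) for i in supply}
col = {j: sum(v for (p, q), v in x.items() if q == j) for j in demand}
assert row == supply and col == demand   # constraints (1)-(3) hold

Z = sum(cost[k] * v for k, v in x.items())
print(Z)   # 243, the minimum reported in Section V-A
```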
B. Rough Model
Let the cost c_{ij} be a rough variable of the form
c_{ij} = ([c_{ij}^{2}, c_{ij}^{3}], [c_{ij}^{1}, c_{ij}^{4}]), where c_{ij}^{1} \leq c_{ij}^{2} < c_{ij}^{3} \leq c_{ij}^{4}.
Then the objective function becomes a rough variable of the form
Z = ([Z^{2}, Z^{3}], [Z^{1}, Z^{4}]), where Z^{r} = \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij}^{r} x_{ij}, \quad r = 1,2,3,4.
The rough model of the transportation problem then becomes
Minimize Z
s.t. the constraints (1)-(3).
IV. SOLUTION PROCEDURE
A. Solution for the rough model
Since the objective function of the transportation problem has rough costs, we minimize the smallest
objective \overline{Z} satisfying Tr\{Z \leq \overline{Z}\} \geq \alpha, where \alpha \in (0,1] is a specified trust (confidence) level; i.e., we
minimize the \alpha-pessimistic value Z_{\inf}(\alpha) of Z. This implies that the optimum objective value will lie below
\overline{Z} with a trust level of at least \alpha. So the problem becomes Min(Min Z).
From the definition of the \alpha-pessimistic value, the above programming becomes
Minimize Z'
s.t. the constraints (1)-(3),
where

Z' =
(1-2\alpha)\, Z^{1} + 2\alpha\, Z^{4}, if \alpha \leq \frac{Z^{2}-Z^{1}}{2(Z^{4}-Z^{1})};
(2\alpha - 1)\, Z^{4} + 2(1-\alpha)\, Z^{1}, if \alpha \geq \frac{Z^{3}+Z^{4}-2Z^{1}}{2(Z^{4}-Z^{1})};
\frac{Z^{2}(Z^{4}-Z^{1}) + Z^{1}(Z^{3}-Z^{2}) + 2\alpha (Z^{3}-Z^{2})(Z^{4}-Z^{1})}{(Z^{3}-Z^{2}) + (Z^{4}-Z^{1})}, otherwise.
Now we formulate another model to minimize the greatest objective \underline{Z} satisfying Tr\{Z \geq \underline{Z}\} \geq \alpha, where
\alpha \in (0,1] is a specified trust (confidence) level; i.e., we minimize the \alpha-optimistic value Z_{\sup}(\alpha) of Z.
This implies that the optimum objective value will lie above \underline{Z} with a trust level of at least \alpha. So the model
becomes Min(Max Z). From the definition of the \alpha-optimistic value, the above problem becomes
Minimize Z''
s.t. the constraints (1)-(3),
where

Z'' =
(1-2\alpha)\, Z^{4} + 2\alpha\, Z^{1}, if \alpha \leq \frac{Z^{4}-Z^{3}}{2(Z^{4}-Z^{1})};
2(1-\alpha)\, Z^{4} + (2\alpha - 1)\, Z^{1}, if \alpha \geq \frac{2Z^{4}-Z^{2}-Z^{1}}{2(Z^{4}-Z^{1})};
\frac{Z^{3}(Z^{4}-Z^{1}) + Z^{4}(Z^{3}-Z^{2}) - 2\alpha (Z^{3}-Z^{2})(Z^{4}-Z^{1})}{(Z^{3}-Z^{2}) + (Z^{4}-Z^{1})}, otherwise.

Since Z_{\inf}(\alpha) \geq Z_{\sup}(\alpha) for 0.5 < \alpha \leq 1, solving the problem with trust level \alpha we conclude that the
optimum objective value lies within the range [Z'', Z'] with a trust level of at least \alpha.
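Applying the \alpha-pessimistic and \alpha-optimistic formulas to the rough objective of Section V, Z = ([193, 256], [149, 297]) at \alpha = 0.8, gives the two bounds directly. The sketch below is illustrative code with (a, b, c, d) = (Z^2, Z^3, Z^1, Z^4):

```python
# Evaluate Z'(alpha) and Z''(alpha) of Section IV for the rough objective
# Z = ([Z2, Z3], [Z1, Z4]) = ([193, 256], [149, 297]) at trust level 0.8.

def pessimistic(a, b, c, d, alpha):
    if alpha <= (a - c) / (2 * (d - c)):
        return (1 - 2 * alpha) * c + 2 * alpha * d
    if alpha >= (b + d - 2 * c) / (2 * (d - c)):
        return (2 * alpha - 1) * d + 2 * (1 - alpha) * c
    return (a * (d - c) + c * (b - a)
            + 2 * alpha * (b - a) * (d - c)) / ((b - a) + (d - c))

def optimistic(a, b, c, d, alpha):
    if alpha <= (d - b) / (2 * (d - c)):
        return (1 - 2 * alpha) * d + 2 * alpha * c
    if alpha >= (2 * d - a - c) / (2 * (d - c)):
        return 2 * (1 - alpha) * d + (2 * alpha - 1) * c
    return (b * (d - c) + d * (b - a)
            - 2 * alpha * (b - a) * (d - c)) / ((b - a) + (d - c))

Z1, Z2, Z3, Z4 = 149, 193, 256, 297
Zp = pessimistic(Z2, Z3, Z1, Z4, 0.8)   # alpha-pessimistic value Z'
Zo = optimistic(Z2, Z3, Z1, Z4, 0.8)    # alpha-optimistic value Z''
print(Zo, Zp)   # the range [Z'', Z'] reported in Section V as [197.54, 250.56]
```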
B. Compromise Solution
In solving the rough model of the transportation problem, we find two solutions: the \alpha-pessimistic solution and
the \alpha-optimistic solution. But in many cases the decision maker will prefer a single solution rather than being
confused by two sets of solutions. In this section, we propose one solution procedure for a compromise
solution. The unit cost of transportation from the i-th source to the j-th destination is a rough variable
c_{ij} = ([c_{ij}^{2}, c_{ij}^{3}], [c_{ij}^{1}, c_{ij}^{4}]), where c_{ij}^{1} \leq c_{ij}^{2} < c_{ij}^{3} \leq c_{ij}^{4}.
We find the \alpha-optimistic value p and the \alpha-pessimistic value q, which gives two real values with
p < q, and correspondingly we get an interval [p, q].
We define an uncertain variable in the interval [p, q] whose distribution function is the linear regular
uncertainty distribution defined by Liu [1]:

\Phi(x) =
0, if x \leq p;
\frac{x-p}{q-p}, if p \leq x \leq q;
1, if x > q.

The inverse uncertainty distribution gives a crisp value c_{ij} as
c_{ij} = \Phi^{-1}(\alpha) = (1-\alpha)\, p + \alpha\, q.
In the above process we obtain a set of deterministic costs c_{ij} from the set of rough costs. The model can then be
solved to give a solution which can be considered a compromise solution.
V. NUMERICAL EXAMPLE
A. Problem with unit transportation costs as crisp numbers
Consider a problem with three sources (i = 1,2,3) and three destinations (j = 1,2,3). The unit transportation costs,
the availability at each source and the demands of each destination are given in the table:

Table I: Unit Transportation Cost c_{ij} in Crisp Form
i\j    1    2    3
1      8   13   11
2      6    4    7
3      5    3   10

Availability and demands: a1 = 14, a2 = 12, a3 = 15; b1 = 21, b2 = 12, b3 = 8.
Then the optimum solution is given by
x11 = 14, x22 = 4, x23 = 8, x31 = 7, x32 = 8, and minimum Z = 243.

B. Problem with unit transportation costs as rough variables
Consider a problem with three sources (i = 1,2,3) and three destinations (j = 1,2,3). The unit transportation costs
are rough variables; the availability at each source and the demands of each destination are crisp numbers, as
given in the table:

Table II: Unit Transportation Cost c_{ij} as Rough Variables
i\j        1                    2                    3
1    ([7,9],[6,10])     ([12,14],[11,15])    ([10,11],[8,12])
2    ([4,5],[3,6])      ([3,4],[2,7])        ([5,7],[4,9])
3    ([5,6],[3,7])      ([2,3],[1,4])        ([10,11],[9,12])

Availability and demands: a1 = 14, a2 = 12, a3 = 15; b1 = 21, b2 = 12, b3 = 8.
Then the optimum solution is given in rough values as Z = ([193,256],[149,297]).
If we take the trust value \alpha = 0.8, then the \alpha-pessimistic optimal solution will be Z' = 250.56 and the \alpha-optimistic
optimal solution will be Z'' = 197.54. So the optimum objective value lies within the range [Z'', Z'], i.e. [197.54,
250.56], when the trust value (confidence level) is 0.8.

C. Compromise Solution
(Problem with unit transportation costs as rough variables)
Consider again the problem of Table II with three sources (i = 1,2,3) and three destinations (j = 1,2,3), where the
unit transportation costs are rough variables, the availability at each source and the demands of each destination
are crisp numbers, and the trust value is 0.8.
The cost values c_{ij} in terms of \alpha-optimistic and \alpha-pessimistic values with \alpha = 0.8 are calculated and shown in the
following table:

i\j       1              2               3
1    [7.2, 8.8]    [12.2, 13.8]    [9.6, 10.88]
2    [4.05, 4.95]  [3.17, 5]       [5.28, 7]
3    [4.6, 5.88]   [2.2, 2.95]     [10.05, 10.95]

Each cost c_{ij} can be considered as an uncertain variable lying in the interval [p, q], where p is the \alpha-optimistic
value and q is the \alpha-pessimistic value of the corresponding uncertain cost. Using the inverse linear
uncertainty distribution, each interval of uncertainty is transformed into a crisp value, shown in the
following table:

i\j      1        2        3
1     8.48    13.48    10.624
2     4.77     4.634    6.656
3     5.624    2.8     10.77

The corresponding optimal solution with trust value 0.8 is calculated as:
x11 = 14, x21 = 4, x23 = 8, x31 = 3, x32 = 12, and minimum Z = 241.52.
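The compromise computation of Section V-C can be verified end to end from its two tables. The sketch below (a verification aid; the crisp costs and the allocation are taken from the tables above) confirms that the crisp cost 8.48 follows from [7.2, 8.8] via the inverse linear distribution, and that the reported allocation is feasible with total cost Z = 241.52:

```python
# Check Section V-C: crisp costs from the final table, allocation
# x11=14, x21=4, x23=8, x31=3, x32=12, supplies (14,12,15), demands (21,12,8).

alpha = 0.8
# Inverse linear uncertainty distribution applied to [p, q] = [7.2, 8.8]:
c11 = (1 - alpha) * 7.2 + alpha * 8.8
print(round(c11, 2))   # 8.48, the (1,1) entry of the crisp cost table

crisp = {(1, 1): 8.48,  (1, 2): 13.48,  (1, 3): 10.624,
         (2, 1): 4.77,  (2, 2): 4.634,  (2, 3): 6.656,
         (3, 1): 5.624, (3, 2): 2.8,    (3, 3): 10.77}
x = {(1, 1): 14, (2, 1): 4, (2, 3): 8, (3, 1): 3, (3, 2): 12}

supply = {1: 14, 2: 12, 3: 15}
demand = {1: 21, 2: 12, 3: 8}
assert {i: sum(v for (s, t), v in x.items() if s == i) for i in supply} == supply
assert {j: sum(v for (s, t), v in x.items() if t == j) for j in demand} == demand

Z = sum(crisp[ij] * v for ij, v in x.items())
print(round(Z, 2))     # 241.52, the compromise minimum reported above
```

As stated in the closing remark of the paper, this compromise value 241.52 falls inside the range [197.54, 250.56] obtained from the rough model.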
The minimum value of Z under the compromise solution lies within the range of objective values obtained by
the rough model.
VI. CONCLUSION
In this work, the cost of transportation from source to destination is considered to be uncertain, and rough
costs are assigned. The availability as well as the demand is considered to be deterministic. Using the
uncertainty distribution, a compromise solution is also proposed for the problem. Further investigation can be
made by considering the availability and demand as uncertain as well.