We continuously innovate and update the HighScore suite to offer you the most comprehensive and
user-friendly toolbox for XRD. In the newest release (version 4.5) of the suite, various new functions
have been added to HighScore:
Parallel KNN for Big Data using Adaptive Indexing (IRJET Journal)
This document presents an evaluation of different algorithms for performing parallel k-nearest neighbor (kNN) queries on big data using the MapReduce framework. It first discusses how kNN algorithms do not scale well for large datasets. It then reviews existing MapReduce-based kNN algorithms like H-BNLJ, H-zkNNJ, and RankReduce that improve performance by partitioning data and distributing computation. The document also proposes using an adaptive indexing technique with the RankReduce algorithm. An implementation of this approach on an airline on-time statistics dataset shows it achieves better precision and speed than other algorithms.
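As a toy illustration of the partition-and-merge pattern these MapReduce kNN joins share (a sketch only, not the H-BNLJ/H-zkNNJ/RankReduce code; the data and split count are made up):

import numpy as np

def local_knn(query, partition, k):
    # "Map" step: each partition returns its k best candidates for the query.
    # Indices are partition-local in this sketch.
    d = np.linalg.norm(partition - query, axis=1)
    idx = np.argsort(d)[:k]
    return [(d[i], i) for i in idx]

def parallel_knn(query, partitions, k):
    # "Reduce" step: merging per-partition top-k lists yields the exact global top k.
    candidates = [c for p in partitions for c in local_knn(query, p, k)]
    return sorted(candidates)[:k]

data = np.random.rand(10_000, 4)          # stand-in for a large dataset
partitions = np.array_split(data, 8)      # stand-in for HDFS splits
neighbors = parallel_knn(np.random.rand(4), partitions, k=5)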
Big Linked Data Interlinking - ExtremeEarth Open Workshop (ExtremeEarth)
This document discusses approaches for interlinking big linked geospatial data. It describes detecting proximity and topological relations between geospatial entities. Examples of topological relations identified include a linestring touching a polygon, a linestring intersecting another linestring, and a polygon containing another polygon. The document also discusses challenges like quadratic time complexity and introduces techniques like progressive and approximate geospatial interlinking to improve efficiency and scalability.
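A minimal sketch of those three topological checks using the Shapely library (illustrative geometries, not the ExtremeEarth interlinking code):

from shapely.geometry import LineString, Polygon

poly  = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
inner = Polygon([(1, 1), (2, 1), (2, 2), (1, 2)])
line1 = LineString([(4, 1), (6, 1)])      # starts on the polygon boundary
line2 = LineString([(5, 0), (5, 2)])

print(line1.touches(poly))       # linestring touching a polygon -> True
print(line1.intersects(line2))   # linestring intersecting a linestring -> True
print(poly.contains(inner))      # polygon containing a polygon -> True

Naively, such predicates must be evaluated for every pair of entities, which is where the quadratic time complexity mentioned above comes from.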
This manual covers basic remote sensing operations, including layer stacking, basic functions, multispectral band combinations and image enhancement, image pre-processing, basic image correction, NDVI, supervised and unsupervised classification, mosaicking, and map production.
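Of these, NDVI has a one-line formula, computed per pixel: NDVI = (NIR - Red) / (NIR + Red). A small NumPy sketch with stand-in band arrays:

import numpy as np

red = np.array([[0.10, 0.20], [0.30, 0.40]])   # stand-in red band
nir = np.array([[0.50, 0.60], [0.70, 0.80]])   # stand-in near-infrared band
ndvi = (nir - red) / (nir + red + 1e-12)       # epsilon avoids division by zero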
This presentation compares different clustering algorithms based on their computational time. This is the first step in creating open-source and bespoke geodemographic classifications in near real time.
Markerless registration for scans of free-form objects (Artemis Valanis)
This document proposes a markerless registration method for registering partial scans of free-form objects. It describes a constrained acquisition process where scans are taken with a small amount of overlap by rotating the scan head vertically or horizontally. The method samples the unknown rotation angle space to approximate the relative transformation between scans. It evaluates the median distance between overlapping points to find the best alignment. The algorithm is validated on scan data and shown to initialize ICP registration accurately without markers or features.
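A rough sketch of that angle-sampling idea under simplifying assumptions (rotation about a single vertical axis, SciPy's KD-tree for nearest neighbors; the paper's constrained acquisition details are omitted):

import numpy as np
from scipy.spatial import cKDTree

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def best_rotation(scan_a, scan_b, n_samples=360):
    # Sample the unknown rotation angle, score each candidate by the median
    # nearest-neighbor distance (robust to partial overlap), keep the best.
    tree = cKDTree(scan_a)
    best = (np.inf, 0.0)
    for theta in np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False):
        d, _ = tree.query(scan_b @ rot_z(theta).T)
        best = min(best, (np.median(d), theta))
    return best   # (score, angle) to initialize ICP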
International Refereed Journal of Engineering and Science (IRJES) is a peer-reviewed online journal for professionals and researchers in the field of computer science. Its main aim is to resolve emerging and outstanding problems revealed by recent social and technological change. IRJES provides a platform for researchers to present and evaluate their work from both theoretical and technical aspects and to share their views.
www.irjes.com
A Data Fusion System for Spatial Data Mining, Analysis and Improvement Silvij... (Beniamino Murgante)
The document describes a data fusion system that automatically fuses imperfect geospatial data from multiple sources to produce a single, higher quality dataset. The system has three main components - preprocessing input data, filtering/fusing the data, and validating the merged output. It uses a modular architecture and processes data through conversion, analysis, relationship detection, attribute transfer, and quality assessment steps. The system provides both command line and graphical user interfaces and aims to improve on existing data through automated harmonization.
Concurrent Ternary Galois-based Computation using Nano-apex Multiplexing Nibs... (VLSICS Design)
Novel realizations of concurrent computations utilizing three-dimensional lattice networks and their corresponding carbon-based field emission controlled switching is introduced in this article. The formalistic ternary nano-based implementation utilizes recent findings in field emission and nano applications which include carbon-based nanotubes and nanotips for three-valued lattice computing via field-emission methods. The presented work implements multi-valued Galois functions by utilizing concurrent nano-based lattice systems, which use two-to-one controlled switching via carbon-based field emission devices by using nano-apex carbon fibers and carbon nanotubes that were presented in the first part of the article. The introduced computational extension utilizing many-to-one carbon field-emission devices will be further utilized in implementing congestion-free architectures within the third part of the article. The emerging nano-based technologies form important directions in low-power compact-size regular lattice realizations, in which carbon-based devices switch less-costly and more-reliably using much less power than silicon-based devices. Applications include low-power design of VLSI circuits for signal processing and control of autonomous robots.
This document discusses considerations for the UK spatial data infrastructure (SDI) lifecycle and supply chain. It introduces concepts like INSPIRE and the UK Location Strategy. It outlines the many organizations involved in geospatial data provision in the UK and challenges around data sharing, ownership and transforming data to comply with INSPIRE specifications without changing native data models. The document proposes a framework for INSPIRE transformation that works with existing data and tools to help organizations start providing compliant data while identifying areas for future improvement.
Big Linked Data Federation - ExtremeEarth Open Workshop (ExtremeEarth)
The document summarizes three years of work on the ExtremeEarth project. It describes Semagrow, a federated query processor that can seamlessly integrate data from multiple remote geospatial dataset servers. It was enhanced during ExtremeEarth to federate multiple geospatial sources. The document also describes KOBE, a benchmarking engine for federated query processors. It was re-engineered to run on containers. Finally, the document outlines three ExtremeEarth use cases involving Semagrow, including validating land usage data and combining snow cover and crop type data.
A Novel Route Optimized Cluster Based Routing Protocol for Pollution Controll... (IRJET Journal)
This document summarizes a research paper that proposes a novel cluster-based routing protocol for sensor networks aimed at pollution monitoring. The protocol aims to optimize routes while maintaining moderate energy efficiency. It first discusses challenges in sensor network routing related to dynamic topology and limited device power. It then outlines two lemmas: 1) reducing cluster head detection time can decrease routing time; and 2) changing cluster heads randomly over time can improve network lifetime. The proposed protocol selects cluster heads randomly based on energy levels and predicts cluster heads to reduce overhead. It aims to optimize routing time and energy efficiency through these techniques.
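A minimal sketch of energy-weighted random cluster-head selection, in the spirit of the second lemma (the paper's prediction step and timing details are omitted):

import numpy as np

def pick_cluster_heads(residual_energy, n_heads, seed=None):
    # Nodes with more residual energy are proportionally more likely to be
    # chosen, and re-running this each round rotates the heads over time.
    rng = np.random.default_rng(seed)
    p = residual_energy / residual_energy.sum()
    return rng.choice(len(residual_energy), size=n_heads, replace=False, p=p)

heads = pick_cluster_heads(np.array([0.9, 0.5, 0.8, 0.2, 0.7]), n_heads=2)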
This document proposes ResNeSt, a split-attention network that divides feature maps into groups and applies attention mechanisms across groups. It outperforms ResNet variants on image classification, object detection, semantic segmentation, and instance segmentation while maintaining the same computational efficiency. The paper introduces ResNeSt's split attention block, training strategies including large batches, data augmentation, and regularization methods. Evaluation shows ResNeSt achieves state-of-the-art accuracy on ImageNet and downstream tasks using less computation than NAS models.
The document compares three image fusion techniques: wavelet transform, IHS (Intensity-Hue-Saturation), and PCA (Principal Component Analysis). For each technique, it describes the methodology, syntax used, and features. It then applies each technique to sample images to produce fused images. The RGB values of the fused images are recorded and compared in a table. The wavelet technique uses max area selection and consistency verification for feature selection. IHS transforms RGB to IHS values and replaces intensity with a panchromatic image. PCA replaces the first principal component with a high-resolution panchromatic image. The document concludes no single technique is best and the quality depends on the application.
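A sketch of the PCA variant as summarized above: project the multispectral bands onto principal components, swap the first component for the statistics-matched panchromatic band, and invert. A generic implementation, not the document's own code:

import numpy as np

def pca_pansharpen(ms, pan):
    # ms: (H, W, B) multispectral cube; pan: (H, W) co-registered panchromatic band.
    h, w, b = ms.shape
    X = ms.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    Xc = X - mu
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    vecs = vecs[:, ::-1]                       # principal components, descending variance
    pcs = Xc @ vecs
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12)     # match PC1's statistics
    pcs[:, 0] = p * pcs[:, 0].std()            # PC1 has zero mean after centering
    return (pcs @ vecs.T + mu).reshape(h, w, b)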
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Novel algorithm for color image demosaicking using Laplacian mask (eSAT Journals)
Abstract: Images in any digital camera are formed with the help of a monochrome sensor, which can be either a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS). Interpolation is the basis of any demosaicking process. The input for interpolation is the output of the Bayer Color Filter Array (CFA), which is a mosaic-like lattice structure. The Bayer CFA samples the channel information of the R, G, and B values separately, assigning only one channel component per pixel. To generate a complete color image, three channel values are required per pixel, and interpolation is used to find the missing samples. Interpolation is a technique for estimating missing values from the discrete observed samples scattered over the space. Thus demosaicking, or de-Bayering, is an algorithm for finding missing values from the mosaic-patterned output of the Bayer CFA. Interpolation results in a few artifacts, such as a zippering effect at the edges. This paper introduces a demosaicking algorithm that outperforms existing demosaicking algorithms; its main aim is to accurately estimate the green component. The standard mechanism used to compare performance is PSNR (Peak Signal-to-Noise Ratio), and the image dataset used for comparison was the Kodak image dataset. The algorithm was implemented using Matlab 2009B. Keywords: Demosaicking, Interpolation, Bayer CFA, Laplacian Mask, Correlation.
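For reference, the PSNR metric named in the abstract, for images with peak value MAX (255 for 8-bit), is PSNR = 10 log10(MAX^2 / MSE). A minimal implementation:

import numpy as np

def psnr(reference, test, max_val=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse) if mse > 0 else np.inf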
In this paper we propose a new test statistic for unsupervised change detection in polarimetric synthetic aperture radar (PolSAR) data. We work with multilook complex (MLC) covariance matrix data, whose underlying model is assumed to be the scaled complex Wishart distribution. We use the complex-kind Hotelling-Lawley (HL) trace statistic for measuring the similarity of two covariance matrices. The sampling distribution of the HL trace is approximated by a Fisher-Snedecor distribution, which is used to define the significance level of a constant false alarm rate change detector. The performance of the proposed method is tested on simulated and real PolSAR data sets and compared to the likelihood ratio test statistic.
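For reference, with sample covariance matrices from the two acquisitions, the complex-kind HL trace described above takes the form below (a sketch in standard notation; the paper's exact scaling constants are omitted):

\[
\tau_{\mathrm{HL}} \;=\; \operatorname{tr}\!\left( \hat{\Sigma}_X^{-1} \, \hat{\Sigma}_Y \right),
\]

whose sampling distribution is then approximated by a Fisher-Snedecor (F) distribution to set the constant-false-alarm-rate threshold.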
TOWARDS REDUCTION OF DATA FLOW IN A DISTRIBUTED NETWORK USING PRINCIPAL COMPO... (cscpconf)
For performing distributed data mining, two approaches are possible: first, data from several sources are copied to a data warehouse and mining algorithms are applied there; second, mining can be performed at the local sites and the results aggregated. When the number of features is high, a lot of bandwidth is consumed in transferring datasets to a centralized location. To avoid this, dimensionality reduction can be done at the local sites. In dimensionality reduction, an encoding is applied to the data so as to obtain a compressed form. The reduced features obtained at the local sites are then aggregated, and data mining algorithms are applied to them. There are several methods of performing dimensionality reduction; two of the most important are Discrete Wavelet Transforms (DWT) and Principal Component Analysis (PCA). Here, a detailed study is done on how PCA can be useful in reducing data flow across a distributed network.
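A minimal NumPy sketch of the local-site PCA compression described above (shapes and sizes are illustrative):

import numpy as np

def pca_compress(X, k):
    # X: (n_samples, n_features) local data. Ship the (n, k) scores plus the
    # small basis and mean instead of the full (n, d) dataset.
    mu = X.mean(axis=0)
    Xc = X - mu
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    basis = vecs[:, ::-1][:, :k]               # top-k principal directions
    return Xc @ basis, basis, mu

X = np.random.rand(1000, 50)
scores, basis, mu = pca_compress(X, k=5)       # roughly 10x less data to transfer
X_approx = scores @ basis.T + mu               # reconstruction at the central site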
1. The document discusses tracking multiple moving objects using the Fast Level Set Method (FLSM). The FLSM uses an adaptive narrow band method and extension velocity field to efficiently update the level set function.
2. It proposes a labeling method to distinguish between objects using an inertial-based extension velocity field. Key situations like objects entering/exiting and merging/splitting are considered.
3. Previously, distinction was based on contour moments and center of gravity. The new method aims to handle more complex situations like merging and splitting using the object's motion history from the extension velocity field.
This document describes a project to parallelize the generation of wind rose plots from climatological data. It discusses two algorithms - one that stores filtered raw data in a local container, and one that does not. It explores various parallelization strategies using OpenMP and MPI to improve performance. Strategies include dividing data reading, processing, and output computation across threads. The best strategy achieved a speedup of 2.04x on a single machine with four threads for 200GB of input data.
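A toy sketch of the "divide the processing across workers" strategy, using Python multiprocessing in place of the project's OpenMP/MPI code (the sector count and data are invented):

import numpy as np
from multiprocessing import Pool

N_SECTORS = 16

def sector_counts(chunk):
    # Bin one chunk of wind directions (degrees) into rose sectors.
    sectors = (chunk / (360.0 / N_SECTORS)).astype(int) % N_SECTORS
    return np.bincount(sectors, minlength=N_SECTORS)

if __name__ == "__main__":
    directions = np.random.uniform(0, 360, 1_000_000)
    with Pool(4) as pool:
        parts = pool.map(sector_counts, np.array_split(directions, 4))
    rose = np.sum(parts, axis=0)   # merge the partial histograms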
Subgraph Matching for Resource Allocation in the Federated Cloud Environment (AtakanAral)
Federated clouds and cloud brokering allow migration of virtual machines across clouds and even deployment of cooperating VMs in different cloud data centers. In order to fully benefit from these new opportunities, we propose a heuristic that outputs a matching between virtual machines and cloud data centers by taking resource capacities, VM topologies, performance, and resource costs into account. Results of our initial evaluation using the CloudSim framework indicate that the proposed heuristic is promising for better optimized placement of networked VM groups onto the federated cloud topology.
Data performance characterization of frequent pattern mining algorithms (IJDKP)
Big data has quickly come under the spotlight in recent years. As big data applications are expected to handle extremely large amounts of data, it is natural that demand for computational environments that accelerate and scale out big data applications is increasing. The behavior of big data applications, however, is not yet clearly defined. Among big data applications, this paper focuses specifically on stream mining applications, whose behavior varies according to the characteristics of the input data. The parameters for data characterization are, however, not clearly defined yet, and there is no study investigating explicit relationships between the input data and stream mining applications, either. Therefore, this paper picks up frequent pattern mining as one representative stream mining application and interprets the relationships between the characteristics of the input data and the behaviors of signature algorithms for frequent pattern mining.
A Subgraph Pattern Search over Graph Databases (IJMER)
The document discusses methods for continuous subgraph pattern searching over graph databases and graph streams. It proposes using Node-Neighbor Trees (NNTs) to represent local graph structures, and projecting NNTs to numerical vectors to enable approximate subgraph isomorphism checking. It also describes how to handle uncertain graph streams by computing probability upper bounds to filter out graph stream-query pairs that are unlikely to match. The overall approach conducts structural filtering followed by probability pruning to reduce the search space when capturing patterns over uncertain graph streams.
CNN Lithology Prediction (Undergrad Thesis Jeremy Adi Padma Nagara - Universi... (Jeremy Adi)
This presentation shows that you can apply machine learning / deep learning in many fields. Here, I use a deep learning technique, a convolutional neural network, to tackle a problem in the geophysics field (gas exploration in the oil and gas industry).
For the full resources, you can check it out here
https://github.com/Jeremy-Adi/CNN-Lithology-Classification
The document compares the performance of a Root Raised Cosine matched filter implemented using hybrid-logarithmic arithmetic versus standard binary and floating point arithmetic. Simulations showed that the hybrid logarithmic structure offered superior performance to fixed point solutions while having significantly reduced complexity compared to floating point equivalents. The use of hybrid logarithmic arithmetic also has the potential to reduce power consumption, latency, and hardware complexity for mobile applications.
Edge Representation Learning with Hypergraphs (MLAI2)
Graph neural networks have recently achieved remarkable success in representing graph-structured data, with rapid progress in both the node embedding and graph pooling methods. Yet, they mostly focus on capturing information from the nodes considering their connectivity, and not much work has been done in representing the edges, which are essential components of a graph. However, for tasks such as graph reconstruction and generation, as well as graph classification tasks for which the edges are important for discrimination, accurately representing edges of a given graph is crucial to the success of the graph representation learning. To this end, we propose a novel edge representation learning framework based on Dual Hypergraph Transformation (DHT), which transforms the edges of a graph into the nodes of a hypergraph. This dual hypergraph construction allows us to apply message-passing techniques for node representations to edges. After obtaining edge representations from the hypergraphs, we then cluster or drop edges to obtain holistic graph-level edge representations. We validate our edge representation learning method with hypergraphs on diverse graph datasets for graph representation and generation performance, on which our method largely outperforms existing graph representation learning methods. Moreover, our edge representation learning and pooling method also largely outperforms state-of-the-art graph pooling methods on graph classification, not only because of its accurate edge representation learning, but also due to its lossless compression of the nodes and removal of irrelevant edges for effective message-passing. Code is available at https://github.com/harryjo97/EHGNN.
Towards better performance: phase congruency based face recognition (TELKOMNIKA JOURNAL)
Phase congruency is an edge detector and a measure of the significant features in an image. It is a robust method against contrast and illumination variation. In this paper, two novel techniques are introduced for developing a low-cost human identification system based on face recognition. First, the valuable phase congruency features, the gradient edges and their associated angles, are used separately for classifying 130 subjects taken from three face databases, with the motivation of eliminating the feature extraction phase; by doing this, the complexity can be significantly reduced. Second, the training process is modified: a new technique, called averaging-vectors, is developed to accelerate training and minimize the matching time. For broader comparison and more accurate evaluation, three competitive classifiers are considered in this work: Euclidean distance (ED), cosine distance (CD), and Manhattan distance (MD). The system performance is very competitive and acceptable, and the experimental results show promising recognition rates with reasonable matching time.
Rietveld refinement is a widely used technique for determining crystal structures and quantifying crystalline materials from powder diffraction data. It works by minimizing the difference between observed and calculated diffraction patterns using least squares refinement. Key aspects include modeling the background, peak shape, unit cell parameters, atomic positions, and other structural details. Common software packages are used to perform the iterative refinement calculations.
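For reference, the quantity a Rietveld refinement minimizes is the weighted sum of squared residuals between the observed and calculated powder patterns,

\[
S \;=\; \sum_i w_i \left( y_i^{\mathrm{obs}} - y_i^{\mathrm{calc}} \right)^2 ,
\]

where the sum runs over the points of the pattern and the weights are commonly taken as the reciprocal of the observed intensity at each point.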
The document deals with crystals, including their definition, how they form, and the crystalline properties and structures of various compounds such as aluminum sulfate, copper sulfate, and borax. It explains that crystals are the most highly organized form of non-living matter and can form by dissolution, fusion, or sublimation. It also describes the procedures for growing seed crystals of these compounds and observing their growth.
Personality structure is understood as the relatively stable elements in human psychology, and the way these elements link together, interact with one another, and govern human behavior.
This document describes an experiment to crystallize copper sulfate pentahydrate. It explains the materials needed, such as powdered copper sulfate and water, and the steps to follow, such as dissolving the salt in hot water and letting it crystallize slowly in a sealed container. It also discusses using the crystals that form as seeds for new crystallizations, letting them grow while hanging from a string inside the container.
This document describes different types of crystallization equipment, including surface-cooled crystallizers, forced-circulation evaporation crystallizers, draft-tube-baffle evaporators, and draft-tube crystallizers. It explains how these units work and the critical design factors, such as circulation velocity, body size, and the type of pump used. It also notes that this equipment is commonly used to produce granular crystals of 8 to 30 mesh.
Influence of pH and temperature on microbial growth (IPN)
The document describes the effect of pH and temperature on microbial growth. It explains that pH affects processes such as the solubility of gases and nutrients, and that microorganisms have an optimum, minimum, and maximum pH for growth. It also details how temperature influences biochemical reactions and how microorganisms are classified as psychrophiles, mesophiles, and thermophiles/hyperthermophiles depending on their optimal temperature. Finally, it presents the...
This document describes crystallization processes and petrogenetic environments. It explains that mineral matter can be amorphous or crystalline, and that crystals form through processes such as solidification, precipitation, or sublimation. It also classifies minerals and rocks, and describes the igneous, metamorphic, and sedimentary environments in which they form.
Crystallization is a physical process by which a component of a mixture can be separated, transforming it into a pure crystalline solid. Crystals can be obtained from gases, liquids, or solutions using techniques such as cooling, solvent exchange, or evaporation of the solvent. Crystallization is industrially important because it yields pure chemical products using little energy.
The teachers propose a crystallization experiment using sugar and water. Crystallization occurs when sugar is dissolved in hot water, creating a supersaturated solution, and the solution is then cooled so that the sugar crystallizes. The document describes the steps of the experiment, including heating sugar and water to the boiling point, cooling the solution in a jar, and observing the formation of sugar crystals over several...
Crystallization is an important industrial process for purifying substances and is frequently used in chemistry. It involves the transfer of a component from a liquid solution to the solid phase in the form of crystals under certain conditions of concentration and temperature. It has many industrial applications, such as the production of sea salt and the purification of chemical and biopharmaceutical compounds.
The document describes the formation processes of igneous rocks. These form when magma (lava) cools and solidifies below or above the Earth's surface, giving rise to intrusive (plutonic) rocks underground or extrusive (volcanic) rocks at the surface. The magma may derive from the Earth's mantle or from pre-existing rocks in any layer of the Earth.
Crystallization is the formation of crystalline solids from solutions. The size and perfection of the crystals depend on the conditions of formation; slow cooling, for example, produces larger, imperfect crystals. Crystallization is an effective purification method if the dissolution and crystallization are repeated several times.
IRJET - Object Detection using Hausdorff Distance (IRJET Journal)
This document proposes using Hausdorff distance for object detection as it can better handle noise compared to other methods like Euclidean distance. The document discusses preprocessing images using Gaussian filtering for noise cancellation. It then represents shapes as point sets for feature extraction before using Hausdorff distance to match shapes between reference and test images for object recognition. Encouraging results were obtained when testing on MNIST, COIL and private handwritten digit datasets.
IRJET- Object Detection using Hausdorff Distance (IRJET Journal)
This document proposes a new object recognition system using Hausdorff distance. The system aims to improve on existing methods like YOLO that struggle with small objects and can capture garbage data. The document outlines preprocessing steps like noise cancellation, representing shapes as point sets, and extracting features. It then describes using Hausdorff distance and shape context to find the best match between input and reference shapes. Testing on datasets showed encouraging results for recognizing handwritten digits.
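A minimal NumPy sketch of the Hausdorff distance both papers build on; SciPy also provides scipy.spatial.distance.directed_hausdorff for the one-sided variant:

import numpy as np

def directed_hausdorff(A, B):
    # Largest distance from a point in A to its nearest neighbor in B.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(A, B):
    # Symmetric version: the worse of the two directed distances.
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = np.random.rand(100, 2)   # e.g. points sampled from a reference shape
B = np.random.rand(120, 2)   # e.g. points sampled from a test shape
print(hausdorff(A, B))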
Metabolomic Data Analysis Workshop and Tutorials (2014) (Dmitry Grapov)
This document provides an introduction and overview of tutorials for metabolomic data analysis. It discusses downloading required files and software. The goals of the analysis include using statistical and multivariate analyses to identify differences between sample groups and impacted biochemical domains. It also discusses various data analysis techniques including data quality assessment, univariate and multivariate statistical analyses, clustering, principal component analysis, partial least squares modeling, functional enrichment analysis, and network mapping.
A Comparative study of Locality Preserving Projection & Principal Component A... (RAHUL WAGAJ)
The document compares the dimensionality reduction techniques of locality preserving projection (LPP) and principal component analysis (PCA) when used with logistic regression for classification. Five public datasets were used to evaluate the techniques. LPP was found to outperform PCA across all datasets and performance metrics by better preserving the local data structure, which is more important for classification than the global structure preserved by PCA. LPP achieved higher accuracy, sensitivity, specificity, precision, F-score, and area under the ROC curve than PCA for all datasets. The results indicate LPP is an effective dimensionality reduction method for classification tasks when local structure is significant.
Analysis of data science software 2020 (Russ Reinsch)
Competitive analysis, product differentiation, nearest neighbor, topological data analysis, summary visualization, data science use cases, data access, data preparation, data exploration.
Effective Occlusion Handling for Fast Correlation Filter-based Trackers (EECJOURNAL)
Correlation filter-based trackers heavily suffer from the problem of multiple peaks in their response maps incurred by occlusions. Moreover, the whole tracking pipeline may break down due to the uncertainties brought by shifting among peaks, which further degrades the correlation filter model. To alleviate the drift problem caused by occlusions, we propose a novel scheme to choose the specific filter model according to different scenarios. Specifically, an effective measurement function is designed to evaluate the quality of the filter response. A sophisticated strategy is employed to judge whether occlusions occur, and then decide how to update the filter models. In addition, we take advantage of both the log-polar method and a pyramid-like approach to estimate the best scale of the target. We evaluate our proposed approach on the VOT2018 challenge and the OTB100 dataset; experimental results show that the proposed tracker achieves promising performance compared against state-of-the-art trackers.
Improved Weighted Least Square Filter Based Pan Sharpening using Fuzzy Logic (IRJET Journal)
This document discusses an improved weighted least squares (WLS) filter-based pan sharpening method using fuzzy logic. It aims to address limitations of prior work by integrating an improved principal component analysis (PCA) algorithm with fuzzy logic for image fusion. The proposed algorithm is implemented in MATLAB using image processing toolbox. Comparative analysis shows the effectiveness of the proposed algorithm based on various performance metrics. It combines useful information from multi-focus images to generate a fused image with better quality.
This project report summarizes research on using deep learning methods for super resolution of document images to improve optical character recognition (OCR) accuracy. Two approaches are studied: SRCNN and incorporating OCR accuracy into the loss function. Experiments include modifying the dataset to focus on text regions and varying the SRCNN architecture. Results show the 2-layer SRCNN trained on the modified dataset achieves the best OCR accuracy, exceeding the ground truth in some cases.
Performance Analysis of Iterative Closest Point (ICP) Algorithm using Modifie... (IRJET Journal)
This document discusses the Iterative Closest Point (ICP) algorithm, which is commonly used for 3D shape registration. It first provides background on ICP and describes some variants like Comprehensive ICP and Trimmed ICP. It then focuses on the Comprehensive ICP algorithm, explaining how it uses a lookup matrix to ensure unique point correspondences between shapes. Finally, it introduces the Modified Hausdorff Distance metric for evaluating similarity between registered shapes, which is more robust than other metrics. The document aims to analyze ICP variant performance using this distance metric.
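A minimal point-to-point ICP sketch (nearest-neighbor correspondences plus an SVD-based rigid update); the lookup-matrix and trimmed variants the document analyzes are omitted:

import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=50):
    src = src.copy()
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(src)                 # closest-point correspondences
        matched = dst[idx]
        mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_d)    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_d          # apply the rigid update
    return src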
Large Scale Kernel Learning using Block Coordinate Descent (Shaleen Kumar Gupta)
This paper explores using block coordinate descent to scale kernel learning methods to large datasets. It compares exact kernel methods to two approximation techniques, Nystrom and random Fourier features, on speech, text, and image datasets. Experimental results show that Nystrom generally achieves better accuracy than random features but requires more iterations. The paper also analyzes the performance and scalability of computing kernel blocks in a distributed setting.
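A sketch of the random Fourier feature construction, one of the two approximations compared; this is the standard recipe for the RBF kernel k(x, y) = exp(-gamma ||x - y||^2), with illustrative sizes:

import numpy as np

def rff(X, D, gamma, seed=0):
    # Maps X (n, d) to (n, D) features with z(x).z(y) approximating k(x, y).
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

Z = rff(np.random.rand(100, 10), D=512, gamma=0.5)
K_approx = Z @ Z.T   # approximates the exact RBF Gram matrix at lower cost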
Automatic Target Detection using Maximum Average Correlation Height Filter an... (IRJET Journal)
This document presents research on using correlation filters for target detection in synthetic aperture radar (SAR) imagery. It proposes using Maximum Average Correlation Height (MACH) filters and Distance Classifier Correlation Filters (DCCF) for target classification. The filters are trained and tested on SAR images from the MSTAR database. Experimental results show that combining MACH and DCCF filters provides better classification accuracy than either filter alone. Some misclassifications occur due to errors in how the SAR images are grouped into clusters during filter training. Improving the image preprocessing and clustering could further enhance classification performance.
The document summarizes updates made to the SPPARKS software. SPPARKS is a kinetic Monte Carlo simulation program. Over the summer, the researchers implemented a new curvature diagnostic and completed the Potts gradient application. The curvature diagnostic calculates grain curvature from simulation data. The Potts gradient application models grain growth under temperature or mobility gradients. Testing showed the applications worked as intended. Documentation was also added to the SPPARKS website to describe the new functionality.
The determination of complex underlying relationships between system parameters from simulated and/or recorded data requires advanced interpolating functions, also known as surrogates. The development of surrogates for such complex relationships often requires the modeling of high dimensional and non-smooth functions using limited information. To this end, the hybrid surrogate modeling paradigm, where different surrogate models are aggregated, offers a robust solution. In this paper, we develop a new high fidelity surrogate modeling technique that we call the Reliability Based Hybrid Functions (RBHF). The RBHF formulates a reliable Crowding Distance-Based Trust Region (CD-TR), and adaptively combines the favorable characteristics of different surrogate models. The weight of each contributing surrogate model is determined based on the local reliability measure for that surrogate model in the pertinent trust region. Such an approach is intended to exploit the advantages of each component surrogate. This approach seeks to simultaneously capture the global trend of the function and the local deviations. In this paper, the RBHF integrates four component surrogate models: (i) the Quadratic Response Surface Model (QRSM), (ii) the Radial Basis Functions (RBF), (iii) the Extended Radial Basis Functions (E-RBF), and (iv) the Kriging model. The RBHF is applied to standard test problems. Subsequent evaluations of the Root Mean Squared Error (RMSE) and the Maximum Absolute Error (MAE) illustrate the promising potential of this hybrid surrogate modeling approach.
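Schematically, a hybrid surrogate of this kind aggregates its component models with locally determined weights (a generic form; the RBHF's reliability-based weights in the trust region are defined in the paper):

\[
\hat{f}(x) \;=\; \sum_i w_i(x)\, f_i(x), \qquad \sum_i w_i(x) = 1, \quad w_i(x) \ge 0 .
\]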
Semantic Image Retrieval Using Relevance Feedback (dannyijwest)
This paper presents an optimized interactive content-based image retrieval framework based on the AdaBoost learning method. Since relevance feedback (RF) is an online process, we optimize the learning process by considering the most-positive image selection on each feedback iteration. To train the system we use AdaBoost. The main contributions of our system are addressing the small training sample problem and reducing retrieval time. Experiments are conducted on 1000 semantic colour images from the Corel database to demonstrate the effectiveness of the proposed framework. These experiments employed a large image database and combined RCWFs and DT-CWT texture descriptors to represent the content of the images.
A Firefly based improved clustering algorithm (IRJET Journal)
This document presents a new clustering algorithm that combines k-means clustering with the firefly optimization algorithm to improve clustering performance.
The proposed algorithm first cleans the data by removing missing values and identical columns. It then uses an enhanced firefly algorithm to select optimal cluster centroids. Finally, k-means clustering is applied to assign data points to the nearest centroids.
The algorithm is tested on various sized datasets and shows improved accuracy of 78-89% and lower error rates compared to the traditional firefly algorithm alone. This demonstrates the proposed approach can perform clustering more efficiently and accurately.
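A toy sketch of the two-stage scheme: a simplified firefly-style search proposes candidate centroid sets (brightness = negative within-cluster error), then k-means refines the best set. Parameters are illustrative, not the paper's:

import numpy as np
from scipy.cluster.vq import kmeans2

def sse(X, C):
    # Within-cluster sum of squared distances for centroid set C.
    d = np.linalg.norm(X[:, None] - C[None], axis=2)
    return (d.min(axis=1) ** 2).sum()

def firefly_centroids(X, k, n_fireflies=10, iters=30, beta=0.5, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    F = X[rng.choice(len(X), (n_fireflies, k))]          # candidate centroid sets
    for _ in range(iters):
        bright = np.array([-sse(X, C) for C in F])
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if bright[j] > bright[i]:                # move i toward brighter j
                    F[i] += beta * (F[j] - F[i]) + alpha * rng.normal(size=F[i].shape)
    return F[np.argmax([-sse(X, C) for C in F])]

X = np.random.rand(500, 2)
centroids, labels = kmeans2(X, firefly_centroids(X, 3), minit='matrix')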
This document discusses online analytical processing (OLAP) for business intelligence using a 3D architecture. It proposes the Next Generation Greedy Dynamic Mix based OLAP algorithm (NGGDM-OLAP) which uses a mix of greedy and dynamic approaches for efficient data cube modeling and multidimensional query results. The algorithm constructs execution plans in a top-down manner by identifying the most beneficial view at each step. The document also describes OLAP system architecture, multidimensional data modeling, different OLAP analysis models, and concludes that integrating OLAP and data mining tools can benefit both areas.
IRJET- Comparative Study of PCA, KPCA, KFA and LDA Algorithms for Face Re... - IRJET Journal
This document compares the performance of four face recognition algorithms - PCA, KPCA, KFA, and LDA - on three standard datasets: AT&T, Yale, and UMIST. It finds that KFA generally achieves the highest recognition rates, particularly for the AT&T and Yale datasets which involve changes in facial expressions and lighting. The Yale dataset, with its variations, yields the best results overall for KFA and LDA. The UMIST dataset, with its profile images, produces lower recognition rates across algorithms due to less similarity between training and test images.
This webinar will provide pesticides residue analysts with valuable information on software method development and data processing for the analysis of pesticide residues in food for both LC–MS and GC–MS. Technical experts will review the latest in software advances to help with data interpretation and reporting.
Similar to PANalytical's high_score_suite_brochure (20)
Comparative analysis between traditional aquaponics and reconstructed aquapon... - bijceesjournal
The aquaponic system of planting is a method that does not require soil. It needs only water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Their use not only makes planting in small spaces possible but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare traditional and reconstructed aquaponic methods for propagating tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system's higher growth yield results in a much more nourished crop than the traditional aquaponics system; it is superior in number of fruits, height, weight, and girth. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system: overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Introduction – e-waste: definition – sources of e-waste – hazardous substances in e-waste – effects of e-waste on environment and human health – need for e-waste management – e-waste handling rules – waste minimization techniques for managing e-waste – recycling of e-waste – disposal and treatment methods of e-waste – mechanism of extraction of precious metal from leaching solution – global scenario of e-waste – e-waste in India – case studies.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte... - University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
A review on techniques and modelling methodologies used for checking electrom... - nooriasukmaningtyas
The proper functioning of the integrated circuit (IC) in an interfering electromagnetic environment has been a serious concern throughout the decades of revolution in the world of electronics, from discrete devices to today's integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry, and smart vehicles in particular, is confronting design issues such as susceptibility to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI, and sensors give misleading values, which can prove fatal in the case of automobiles. In this paper, the authors present a non-exhaustive review of research work concerned with the investigation of EMI in ICs and the prediction of this EMI using various modelling methodologies and measurement setups.
Batteries: Introduction – types of batteries – discharging and charging of a battery – characteristics of a battery – battery rating – various tests on a battery – Primary battery: silver button cell – Secondary battery: Ni-Cd battery – Modern battery: lithium-ion battery – maintenance of batteries – choice of batteries for electric vehicle applications.
Fuel cells: Introduction – importance and classification of fuel cells – description, principle, components and applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Embedded machine learning-based road conditions and driving behavior monitoring - IJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT - jpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman's Rimland, and Hegemonic Stability theories, the study examines China's role
in Central Asia. It adheres to the empirical epistemological method and takes care of objectivity,
critically analyzing primary and secondary research documents to elaborate the role of China's
geo-economic outreach in the Central Asian countries and its future prospects. According to this study,
China is seeing significant success in trade, pipeline politics, and gaining influence over other
governments, a success that may be attributed to the effective utilisation of key tools such as the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024 - Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares the team's journey into solving complex build-cache problems that affect Gradle builds, aiming to demonstrate the possibilities for faster builds through the challenges and solutions found along the way. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
International Conference on NLP, Artificial Intelligence, Machine Learning an... - gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
2. The HighScore suite
In a nutshell, what’s new in the HighScore suite 4.5?
• Instant candidate match - a new way of single point analysis
• Improved graphical representation & user-friendliness for phase identification
• Faster fitting of large data sets by using multiple cores
• Best space group determination algorithm on the market
• Generalization of the Finger-Cox-Jephcoat asymmetry for all profile functions
• PDF calculation from observed XRD scans
The most comprehensive powder
diffraction software
The HighScore suite contains two modules: HighScore and the Plus option. While HighScore is a comprehensive phase
identification program, the Plus option has the additional functionalities of profile fitting, Rietveld, crystallographic and
extended cluster analysis. The HighScore suite is designed with flexibility in mind. Whether you are pushing the techniques
to their limits or establishing a regular and routine assessment - the HighScore suite will always suit your requirements.
Thanks to the combined efforts of powder diffraction
communities worldwide, databases of powder patterns are
growing rapidly, extending the scope of powder pattern
search-match analyses. New statistical methods such as
PLSR are also emerging, providing rapid and targeted
analyses for the reproducibility and quality control of
materials.
HighScore comes with a comprehensive help file, support
material and access to education and training. With local
applications specialists on hand to answer your questions,
you are never far from help.
Consult our latest paper for a description of the new
methodologies incorporated in the software:
Powder Diffraction, Volume 29 / Supplement S2 / December
2014, pp S13-S18
3. HighScore
Full-pattern approach for phase
identification and much more
Determination of the crystalline
components in your material:
phase identification
Instant candidate match
A new way of single point analysis suggests candidates just by
pointing at a data point
Search-match
Powerful search-match algorithm that combines peak and profile
data and instantly re-scores an existing candidate list
Automatic identification
Best matches for candidates can be automatically accepted using a
sophisticated filter.
Chemistry calculator
The chemistry calculator breaks down phase chemistry into
simple elements, oxides, sulfides or other compounds. It can be
used for single phases or for phase mixtures with known phase
concentrations.
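For illustration, the arithmetic behind such a breakdown is simple mass bookkeeping. The following minimal Python sketch (our own example, not HighScore code; atomic masses are approximate) converts a formula unit into elemental weight fractions:

```python
# Minimal sketch of a phase-chemistry breakdown: convert a formula unit
# into elemental weight fractions. Atomic masses are approximate.
ATOMIC_MASS = {"Ca": 40.078, "C": 12.011, "O": 15.999}

def weight_fractions(composition):
    """composition maps element -> atoms per formula unit."""
    total = sum(ATOMIC_MASS[el] * n for el, n in composition.items())
    return {el: ATOMIC_MASS[el] * n / total for el, n in composition.items()}

calcite = {"Ca": 1, "C": 1, "O": 3}  # CaCO3
print(weight_fractions(calcite))     # Ca ~40 %, C ~12 %, O ~48 %
```

The same bookkeeping extends to oxides or sulfides by grouping elements into compound units, and to mixtures by weighting each phase with its known concentration.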
Reference databases
All reference databases are supported, including those created by the
user.
Figure 1. Selection example of chemical elements from the
periodic table
Figure 2. Phase identification of a
minerals mixture. All phases are
color-coded and their peaks are
displayed with their corresponding
markers for improved readability.
The semi-quantification is done with reference intensity ratios (a numerical sketch follows the figure).
[Diffractogram: Counts versus position in °2θ (Cu radiation), 30-70 °2θ, with the quantified phases Calcite 22 %, Eskolaite 18 %, Fluorite 59 %]
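The reference intensity ratio (RIR) arithmetic behind such a semi-quantification can be sketched as follows; the intensities and RIR values in this Python snippet are hypothetical, not those of Figure 2:

```python
# RIR semi-quantification: w_i = (I_i / RIR_i) / sum_j (I_j / RIR_j),
# where I_i is the strongest-line intensity of phase i and RIR_i its
# published intensity ratio versus corundum. All numbers are hypothetical.
phases = {"Fluorite": (100.0, 3.8), "Calcite": (55.0, 7.0), "Eskolaite": (40.0, 5.0)}
scaled = {name: I / rir for name, (I, rir) in phases.items()}
total = sum(scaled.values())
for name, s in scaled.items():
    print(f"{name}: {100 * s / total:.0f} %")
```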
4. Figure 4. Fitted profile and peak list with FWHM and integral breadth
(top part) and Gaussian and Lorentzian broadening contributions with
Williamson-Hall plot (bottom part)
Investigating the microstructure: strain & size analysis
Information on the microstructure of crystalline materials is obtained
from the width and the shape of X-ray single peak profiles. Before the
analysis, HighScore can correct for the instrumental contribution of the
line broadening.
The results are microstrain and/or crystallite size information for each
peak. For multiple peaks a Williamson-Hall plot can be shown with the
average values [2].
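As a rough sketch of the underlying analysis (not HighScore's implementation), a Williamson-Hall fit is a straight line through β·cos θ versus 4·sin θ, whose intercept gives the crystallite size and whose slope gives the microstrain. The peak positions and breadths below are invented for illustration:

```python
import numpy as np

# Williamson-Hall: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta),
# with beta the instrument-corrected integral breadth (radians),
# D the crystallite size and eps the microstrain.
wavelength = 1.5406   # Cu K-alpha1, Angstrom
K = 0.9               # Scherrer constant (shape dependent)
two_theta = np.array([28.4, 47.3, 56.1, 69.1])          # degrees, made up
beta = np.radians(np.array([0.20, 0.25, 0.28, 0.33]))   # corrected breadths

theta = np.radians(two_theta / 2)
slope, intercept = np.polyfit(4 * np.sin(theta), beta * np.cos(theta), 1)
print(f"microstrain ~ {slope:.4f}")
print(f"crystallite size ~ {K * wavelength / intercept:.0f} Angstrom")
```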
Figure 5.
a) Reduced structure
function of Zn(CN)2
measured at room
temperature and b) its
corresponding atomic
pair distribution
function calculated
within HighScore
Getting insights into disorder and local structure:
pair distribution function
• Derivation of the reduced structure function
and the corresponding atomic pair distribution
(a numerical sketch of this transform is given below)
• Correction and normalization of pair
distribution function (PDF) data is made easy.
With a few clicks, you can correct for:
- Absorption
- Bremsstrahlung
- Compton and multiple scattering
- Lorentz polarization
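The step from the reduced structure function to the PDF is, in essence, a sine Fourier transform, G(r) = (2/π) ∫ Q[S(Q) − 1] sin(Qr) dQ. A minimal numerical sketch, with a synthetic S(Q) standing in for real corrected data:

```python
import numpy as np

# Sine Fourier transform from the reduced structure function
# F(Q) = Q * [S(Q) - 1] to the PDF G(r) = (2/pi) * Int F(Q) sin(Q r) dQ.
# S(Q) below is a synthetic placeholder; in practice it comes from a
# corrected, normalized scan as described above.
Q = np.linspace(0.5, 25.0, 2000)                 # 1/Angstrom
S = 1 + np.exp(-Q / 10.0) * np.sin(2.0 * Q)      # placeholder S(Q)
F = Q * (S - 1)                                  # reduced structure function

r = np.linspace(0.5, 20.0, 400)                  # Angstrom
G = (2 / np.pi) * np.trapz(F * np.sin(np.outer(r, Q)), Q, axis=1)
```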
Discovering hidden information or correlations:
partial least squares regression method
Figure 3. Comparison between wet chemistry and the
PLSR method for the determination of the Fe2+ content
in a series of mineral samples [1]
The partial least squares regression (PLSR) method in HighScore:
• User-friendly
• Truly statistical approach that compares data to real-life calibrations
and does not require the lengthy simulation and fitting of a sample
model.
• Rapid and direct correlation of measured data to the sample of
interest
As illustrated in Figure 3, the PLSR method offers considerable time
savings over wet chemistry and is just as reliable.
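A minimal sketch of this kind of calibration, using the PLSRegression estimator from scikit-learn on random placeholder data (HighScore's own implementation and file formats are not shown here):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# PLSR calibration sketch: regress full scans (one intensity per 2theta
# step) against a property known from wet chemistry, then predict new
# samples. All data below are random placeholders.
rng = np.random.default_rng(0)
X_cal = rng.random((30, 2000))            # 30 calibration scans
y_cal = rng.uniform(4.2, 5.6, 30)         # reference Fe2+ content (%)

pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
fe_content = pls.predict(rng.random((3, 2000)))  # unknown samples
```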
[Plot: Fe2+ content (%) from 4.2 to 5.6 versus sample number (0-36), comparing PLSR with wet chemistry]
5. Deconvoluting overlapping reflections: profile fit
Figure 7.
Profile fitting of a lysozyme
microcrystalline sample
For an improved determination of the peak parameters,
profile fitting allows a deconvolution of severely overlapping
reflections (a minimal fitting sketch follows the list below).
• Improved extractable
parameters:
- Position
- Intensity
- Width
- Shape
• Useful information for:
- Crystallite size
- Microstrain
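As an illustration of the deconvolution idea only (not of HighScore's fitting kernel), the sketch below fits two overlapping pseudo-Voigt peaks with scipy, refining position, intensity, width and shape for each; all data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# Pseudo-Voigt profile: weighted sum of a Lorentzian and a Gaussian,
# parameterized by position, intensity (amp), width (FWHM) and shape (eta).
def pseudo_voigt(x, pos, amp, fwhm, eta):
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    gauss = np.exp(-((x - pos) ** 2) / (2 * sigma ** 2))
    lorentz = 1 / (1 + ((x - pos) / (fwhm / 2)) ** 2)
    return amp * (eta * lorentz + (1 - eta) * gauss)

def doublet(x, p1, a1, w1, e1, p2, a2, w2, e2):
    return pseudo_voigt(x, p1, a1, w1, e1) + pseudo_voigt(x, p2, a2, w2, e2)

# Synthetic overlapping doublet (degrees 2theta) with a little noise.
x = np.linspace(29.0, 32.0, 300)
y = doublet(x, 30.2, 100, 0.30, 0.5, 30.5, 60, 0.30, 0.5)
y += np.random.normal(0, 1.0, x.size)

start = [30.1, 80, 0.2, 0.5, 30.6, 50, 0.2, 0.5]  # initial guesses
params, _ = curve_fit(doublet, x, y, p0=start)    # refined peak parameters
```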
Figure 8. Original graphic of the X-ray diffraction pattern of a
rust sample published in Powder Diffraction 1, 299 (1986) [3]
Figure 9. Rietveld refinement on the converted scan
Digitalizing powder diffraction patterns: bitmap-to-scan converter
Handling big data: cluster analysis
Figure 6. Cluster analysis of fly
ash raw materials coming from
different sources
Modern X-ray diffraction equipment allows rapid measurements
resulting in large amounts of data to be analyzed. The best way to
tackle the data evaluation is to identify and group similar data
sets, single out the most representative ones, and point out
outliers.
The cluster analysis tool implemented in HighScore makes this analysis
smooth and easy.
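One common way to group similar scans, shown here only as an illustrative sketch on random placeholder data (not HighScore's algorithm), is agglomerative clustering on a correlation-based distance between diffractograms:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Cluster diffractograms by similarity: 1 - Pearson correlation between
# scans serves as the distance; the scans here are random placeholders.
scans = np.random.rand(50, 3000)                          # 50 measured scans
dist = 1 - np.corrcoef(scans)[np.triu_indices(50, k=1)]   # condensed distances
clusters = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
print(clusters)  # cluster label per scan
```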
6. Figure 11. Quantification of a slag cement sample using multi-phase model
fitting
The ideal tool for crystallographic
analysis and more
For structural analysis and quantification:
Rietveld and PONKCS methods
By adding the Plus option to HighScore, you will have a true all-in-one package including cluster analysis, PLSR, Rietveld
analysis, phase identification, and many other tools integrated in a user-friendly environment.
HighScore
and the Plus option
The Rietveld method is a full-pattern fitting method in
which a measured diffraction profile and a calculated profile
are compared and, by varying a range of parameters, the
difference between the two profiles is minimized (see
Figure 10). A standard Rietveld refinement requires atomic
positions, space group and cell parameters.
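In textbook form, the quantity minimized over all profile points i is the standard Rietveld weighted sum of squared residuals (not a HighScore-specific expression):

```latex
\Phi = \sum_i w_i \left( y_i^{\mathrm{obs}} - y_i^{\mathrm{calc}} \right)^2 ,
\qquad w_i = \frac{1}{y_i^{\mathrm{obs}}}
```

where the calculated profile is built from the background plus the Bragg contributions of all phases, computed from the atomic positions, space group and cell parameters mentioned above.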
PANalytical’s Rietveld algorithm is an advanced
implementation of widely accepted and proven technology,
continuously developed over the past decades.
The fitting of data using the Rietveld kernel has been
significantly improved by employing:
• improved asymmetric peak functions,
• proper description of the Kα contribution,
• an improved model for preferred orientation with the use
of spherical harmonics.
For the quantification of a phase with an unknown crystal
structure, the PONKCS (Partial Or No Known Crystal
Structure) method is the solution [4], and it is as efficient as
the Rietveld method.
The fitting kernel implemented in HighScore and the Plus
option allows for quantification of any phase, either via the
PONKCS method alone or in combination with the Rietveld
method as illustrated here with a slag cement (see Figure
11). Additional fitting procedures (Pawley, Le Bail, individual
peaks, etc.) can be used if required.
Figure 10. Rietveld refinement with HighScore Plus of Fe(IO3)3 measured
with Mo Kα radiation.
7. Figure 13. Parametric measurement of RbMnPO4 as a
function of temperature. Data were treated using a batch
to carry out a Le Bail fit and to export the refined
parameters (volume, cell parameters with error bars) as a
function of temperature
Figure 12. After carrying out a Le Bail or Pawley fit, with a few clicks, the possible space groups can
be determined using the most advanced algorithm ExtSym [5].
New crystalline phases: indexation and space group determination
Speeding up your data processing: automatic data treatment
The most popular and powerful
indexing programs are incorporated in
HighScore and the Plus option:
• Dicvol
• Treor
• ITO
• McMaille
Carrying out a parametric (time, temperature, composition,
etc.) experiment easily yields a large amount of data sets
to process. With HighScore and the Plus option, batches
can be created for any type of automatic data processing:
Rietveld analysis, Le Bail or Pawley fit, etc. The output can
be exported in an ASCII format and further treated with any
software. An illustration of such data treatment is shown in
Figure 13.
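Once exported, the ASCII results can be post-processed with any tool. The short Python sketch below assumes a hypothetical file name and column layout (temperature, cell volume, estimated error) similar in spirit to the output behind Figure 13:

```python
import numpy as np
import matplotlib.pyplot as plt

# Post-processing of an exported ASCII batch result. The file name and
# the three-column layout (temperature, cell volume, error) are
# assumptions made for this illustration only.
T, V, dV = np.loadtxt("lebail_batch.txt", unpack=True)
plt.errorbar(T, V, yerr=dV, fmt="o")
plt.xlabel("Temperature (K)")
plt.ylabel("Cell volume (A^3)")
plt.show()
```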