This document describes using a Generalized Regression Neural Network (GRNN) to identify the skeleton types of monoterpenoid compounds from their 13C NMR chemical shift data. 13C NMR data from 328 training compounds belonging to 8 monoterpenoid skeleton classes (Myrcane, Santoline, Menthane, Thujane, Bornane, Isocamphane, Pinane, Fenchane) were used to train the GRNN. 113 test compounds were then used to evaluate the trained network. At a spread constant of 15, the network accurately identified the Myrcane, Santoline and Menthane skeletons but struggled with the Bornane and Pinane skeletons. Increasing the training data for those two
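The GRNN classification described above can be sketched in a few lines: every training pattern contributes a Gaussian-weighted vote for its own class, with the spread constant controlling how quickly that vote decays with distance. The shift vectors and class labels below are toy values for illustration, not real 13C NMR data.

```python
import math

def grnn_classify(x, train_X, train_y, spread):
    """Classify one sample with a GRNN: each training pattern contributes a
    Gaussian weight exp(-d^2 / (2*spread^2)) toward its own class; the class
    with the largest summed weight wins."""
    scores = {}
    for pattern, label in zip(train_X, train_y):
        d2 = sum((a - b) ** 2 for a, b in zip(pattern, x))
        scores[label] = scores.get(label, 0.0) + math.exp(-d2 / (2 * spread ** 2))
    return max(scores, key=scores.get)

# Toy two-feature "chemical shift" vectors for two skeleton classes
train_X = [[20.0, 120.0], [22.0, 118.0], [35.0, 70.0], [33.0, 72.0]]
train_y = ["Menthane", "Menthane", "Bornane", "Bornane"]

print(grnn_classify([21.0, 119.0], train_X, train_y, spread=15.0))  # → Menthane
```

A small spread makes the network behave like nearest-neighbor matching; a large spread (such as the 15 used in the study) smooths the votes across many training patterns.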
Protein structure determination from hybrid NMR data - Mark Berjanskii
1) The document discusses various methods for determining the 3D structure of proteins, including x-ray crystallography, NMR spectroscopy, and cryo-electron microscopy.
2) X-ray crystallography involves purifying the protein, crystallizing it, collecting diffraction data from x-rays hitting the crystal, using this data to determine phases and calculate an electron density map, and building an atomic model through refinement.
3) NMR spectroscopy involves dissolving the purified protein and using nuclear magnetic resonance to measure distances between atomic nuclei, allowing the structure to be calculated.
The Assembly, Structure and Activation of Influenza A M2 Transmembrane Domain... - Haley D. Norman
This document summarizes three research papers on computational methods for analyzing protein structures and interactions. The first paper describes a Bayesian method for determining protein structures from sparse single-molecule X-ray diffraction data. The second paper presents xMDFF, a new molecular dynamics flexible fitting approach for refining low-resolution protein structures determined by X-ray crystallography. The third paper introduces i-ATTRACT, a new flexible protein-protein docking method that combines rigid body and flexible interface residue energy minimization for predicting protein complex structures.
PROTEIN STRUCTURE PREDICTION USING SUPPORT VECTOR MACHINE - ijsc
Support Vector Machines (SVMs) are used to predict protein structure. Bioinformatics methods for protein structure prediction mostly depend on the amino acid sequence. This paper addresses 1-D, 2-D, and 3-D protein structure prediction. Protein structure prediction is one of the most important problems in modern computational biology. Support Vector Machines have shown strong generalization ability for protein structure prediction. Binary classification techniques of the Support Vector Machine are implemented with an RBF kernel function. The Radial Basis Function (RBF) kernel of the SVM produces better accuracy in terms of classification and learning results.
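The RBF kernel and the SVM dual-form decision rule mentioned in the abstract can be sketched directly. The gamma value, support vectors, and multipliers below are hypothetical illustrative values, not trained parameters from the paper.

```python
import math

def rbf_kernel(a, b, gamma=0.5):
    """RBF kernel K(a, b) = exp(-gamma * ||a - b||^2); gamma here is an
    arbitrary illustrative value, not a tuned hyperparameter."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def svm_decision(x, support_vecs, alphas, labels, bias=0.0, gamma=0.5):
    """SVM dual decision rule: sign(sum_i alpha_i * y_i * K(sv_i, x) + b).
    The support vectors and multipliers fed in below are hypothetical."""
    s = sum(a * y * rbf_kernel(sv, x, gamma)
            for sv, a, y in zip(support_vecs, alphas, labels))
    return 1 if s + bias >= 0 else -1

svs = [[0.0, 0.0], [2.0, 2.0]]
alphas, labels = [1.0, 1.0], [-1, 1]
print(svm_decision([1.9, 2.1], svs, alphas, labels))  # → 1 (near the +1 vector)
```

Points close to a support vector receive a kernel weight near 1 from it and nearly 0 from distant ones, which is why the RBF kernel handles nonlinearly separable classes well.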
Peptide Mass Fingerprinting (PMF) and Isotope Coded Affinity Tags (ICAT) - Suresh Antre
An analytical technique for identifying unknown proteins. The peptide masses are compared to a database containing the theoretical peptide masses of all known protein sequences.
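The mass-matching step at the heart of PMF can be sketched as a tolerance comparison: count how many observed peaks fall near a theoretical digest mass for each candidate protein. The database entries and masses below are made-up values for illustration.

```python
def pmf_score(observed, theoretical_db, tol=0.2):
    """Score each database protein by how many observed peptide masses (Da)
    fall within +/- tol of one of its theoretical digest masses, then
    return the best-scoring protein and all scores."""
    scores = {}
    for protein, masses in theoretical_db.items():
        scores[protein] = sum(
            any(abs(o - t) <= tol for t in masses) for o in observed)
    return max(scores, key=scores.get), scores

# Hypothetical theoretical peptide masses (not real proteins)
db = {
    "protein_A": [512.3, 877.4, 1203.6, 1534.8],
    "protein_B": [498.2, 910.5, 1100.1],
}
best, scores = pmf_score([512.25, 1203.7, 1534.7], db)
print(best)  # → protein_A (all three observed peaks match)
```

Real PMF search engines refine this idea with probabilistic scoring, but the core operation is this peak-to-database comparison within a mass tolerance.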
This document discusses de novo protein structure prediction, which predicts protein structure from amino acid sequence alone without using existing protein templates. It notes the need for ab initio prediction when no homologous structures exist. Successful de novo prediction requires an accurate energy function to identify native structures, an efficient conformational search method, and ability to select native models. Results from ab initio prediction typically have 5-10 Angstrom accuracy. Domain prediction is important to divide large proteins into independently folding domains for prediction. Advantages include automation and ability to structurally annotate genomes. Challenges include the vast conformational search space and need for accurate energy functions.
Gel Based Proteomics and Protein Sequences Analysis - Gelica F
Two-dimensional gel electrophoresis (2DE) is the standard method for quantitative proteome analysis. It combines protein separation based on isoelectric focusing and molecular weight. In the first dimension, proteins are separated based on their isoelectric point using immobilized pH gradients. In the second dimension, proteins are separated by molecular weight using SDS-PAGE. The separated protein spots are then analyzed using mass spectrometry to identify individual proteins. 2DE provides high resolution and the ability to analyze thousands of proteins simultaneously, but it also has limitations including irreproducibility and inability to resolve all proteins.
Data Integration, Mass Spectrometry Proteomics Software Development - Neil Swainston
This document discusses quantitative proteomics and integrating proteomics data into kinetic modeling in systems biology. It describes using isotopically labeled peptides to quantify proteins simultaneously via mass spectrometry. A method called QconCAT is outlined which uses an artificial protein containing multiple labeled peptides to reference. The document then describes an informatics pipeline to analyze such quantitative proteomics data, identifying peptides, determining concentrations, uploading data to repositories, and linking it with modeling databases to incorporate experimental data into systems biology simulations.
This presentation discusses protein structure prediction using Rosetta. It begins with an overview of the Critical Assessment of Protein Structure Prediction (CASP) experiments and notes that Rosetta is one of the top performing free-modeling servers. The presentation then describes the basic ab initio protocol used by Rosetta, which involves fragment insertion, scoring, and refinement. It also discusses limitations and success rates. Key aspects of the Rosetta energy functions and sampling algorithms are presented. Examples of specific Rosetta applications including low-resolution modeling and refinement are provided.
Theoretical evaluation of shotgun proteomic analysis strategies; Peptide obse... - Keiji Takamoto
This document discusses evaluating different strategies for shotgun proteomic analysis through theoretical modeling. It develops a peptide observability function based on mouse proteomic data to predict how observable peptides are by LC-MS/MS. This function is applied to theoretically digested mouse proteins using different proteases and separation techniques to evaluate their combinations and the separation profiles achieved. The results suggest SAX/trypsin and IEF/trypsin are favorable combinations that provide good separation.
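The theoretical digestion step underlying this kind of analysis can be sketched with the standard trypsin specificity rule: cleave C-terminal to lysine (K) or arginine (R), except when the next residue is proline (P). The sequence below is a short made-up example.

```python
def tryptic_digest(sequence):
    """In-silico tryptic digest: cleave after K or R unless the next
    residue is P (the standard trypsin specificity rule)."""
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        if aa in "KR" and (i + 1 == len(sequence) or sequence[i + 1] != "P"):
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])
    return peptides

# R followed by P is not cleaved, so only the K sites split this sequence
print(tryptic_digest("MKWVTFRPLLK"))  # → ['MK', 'WVTFRPLLK']
```

Running such a digest over a whole proteome, then filtering peptides through an observability function, is the kind of theoretical modeling the abstract describes.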
Proteomics uses techniques from molecular biology, biochemistry, and genetics to analyze proteins produced by genes. Mass spectrometry is commonly used in proteomics to identify proteins. Techniques like isotope-coded affinity tags (ICAT) allow comparative analysis of protein expression between samples by labeling proteins with stable isotopes before mass spectrometry analysis. ICAT involves labeling cysteine-containing peptides from two samples with either light or heavy isotopic reagents, mixing the samples, then using mass spectrometry to quantify differences in protein expression between the original samples based on mass shifts between labeled peptides.
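The quantification logic of ICAT can be sketched as two steps: pair up peaks separated by the reagent's fixed mass shift (8 Da for the original d0/d8 reagents), then take the intensity ratio of each light/heavy pair. The peak list below is hypothetical.

```python
def pair_peaks(peaks, shift=8.0, tol=0.05):
    """Find (light, heavy) peak pairs whose masses differ by the labeling
    mass shift; shift=8.0 mimics the original d0/d8 ICAT reagents."""
    pairs = []
    for mz1, i1 in peaks:
        for mz2, i2 in peaks:
            if abs((mz2 - mz1) - shift) <= tol:
                pairs.append(((mz1, i1), (mz2, i2)))
    return pairs

def icat_ratio(light_intensity, heavy_intensity):
    """Relative expression between the two samples for one labeled peptide."""
    return light_intensity / heavy_intensity

# Hypothetical spectrum: one light/heavy pair plus an unrelated peak
peaks = [(500.30, 1200.0), (508.30, 600.0), (640.10, 300.0)]
light, heavy = pair_peaks(peaks)[0]
print(icat_ratio(light[1], heavy[1]))  # → 2.0 (sample 1 has twice as much)
```

Because both labeled forms of a peptide co-elute and ionize almost identically, the intensity ratio directly reflects the relative abundance in the two original samples.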
Methods of Protein structure determination - EL Sayed Sabry
This document summarizes several methods for determining protein structure: X-ray crystallography, nuclear magnetic resonance spectroscopy, and cryo-electron microscopy. X-ray crystallography involves growing protein crystals, exposing them to X-rays to generate diffraction patterns, and using the patterns to build 3D electron density maps of the protein. Nuclear magnetic resonance spectroscopy measures distances between atomic nuclei in soluble proteins by analyzing spectra from radiofrequency pulses applied in strong magnetic fields. Cryo-electron microscopy images frozen, hydrated protein samples with an electron microscope to determine large protein structures without the need for crystallization.
This document summarizes an ab initio study of the denaturation of the Small Ubiquitin-like Modifier (SUMO) protein using molecular dynamics simulations and NMR calculations. The study found that after denaturing, some residues in SUMO still showed propensities to form secondary structure rather than becoming fully random coils. Molecular dynamics simulations of different SUMO topologies under denaturing conditions were performed. NMR properties were then calculated and compared to experimental observations, showing some residues maintained beta sheet or alpha helical propensities when denatured. This suggests denatured proteins can become trapped in local energy minima rather than fully unfolding.
Bioinformatics emerged from the marriage of computer science and molecular biology to analyze massive amounts of biological data, like that produced by the Human Genome Project. It uses algorithms and techniques from computer science to solve problems in molecular biology, like comparing genomic sequences to understand evolution. As genomic data exploded publicly, bioinformatics was needed to efficiently store, analyze, and make sense of this information, which has applications in molecular medicine, drug development, agriculture, and more.
2D gel electrophoresis is a powerful technique that separates proteins based on two properties - their isoelectric point and molecular weight. In the first dimension, isoelectric focusing separates proteins based on isoelectric point, while SDS-PAGE in the second dimension separates them based on molecular weight. Each spot on the 2D gel corresponds to a single protein that can then be analyzed by mass spectrometry. This technique allows for the high-throughput separation and analysis of thousands of proteins from biological samples.
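The two-property separation described above can be pictured as mapping each protein to a spot on a grid: isoelectric point along the x axis (first dimension, IEF) and molecular weight along the y axis (second dimension, SDS-PAGE). The ranges and grid size below are illustrative choices, not properties of any real gel.

```python
def gel_position(pi, mw, pi_range=(3.0, 10.0), mw_range=(10.0, 250.0), size=100):
    """Map a protein's isoelectric point (x axis) and molecular weight in kDa
    (y axis; heavier proteins migrate less, so they sit nearer the top) to
    toy gel grid coordinates. Ranges and grid size are illustrative."""
    x = (pi - pi_range[0]) / (pi_range[1] - pi_range[0]) * (size - 1)
    y = (mw_range[1] - mw) / (mw_range[1] - mw_range[0]) * (size - 1)
    return round(x), round(y)

# Two hypothetical proteins: an acidic light one and a basic heavy one
print(gel_position(4.5, 25.0))   # left side, near the bottom
print(gel_position(9.0, 150.0))  # right side, near the top
```

Because the two axes are (nearly) independent properties, two proteins only co-migrate to the same spot if they agree on both, which is what gives 2DE its resolving power.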
The document discusses proteomics, which is the study of the entire complement of proteins in a cell or organism. It defines key proteomics terms like proteome and describes techniques used in proteomics like protein separation, 2D gel electrophoresis, mass spectrometry, and protein digestion. The goals of proteomics include detecting and comparing protein expression profiles to understand biological processes and discover drug targets. Proteomics provides important insights not available through genomics alone.
The document discusses metabolomics data analysis and issues for biostatistics. It describes the metabolomics pipeline from experimental design and data acquisition to statistical analysis and biological interpretation. Key aspects covered include data preprocessing methods, exploratory and supervised multivariate analysis, and biological interpretation tools like metabolic network inference and pathway analysis. Specific statistical challenges in metabolomics like handling non-detects and exploring variable importance are also addressed.
This document discusses protein extraction and fractionation techniques used in food proteomics. It begins by outlining various methods for disrupting plant cell walls including mechanical, ultrasonic, pressure and temperature-based techniques. It then describes approaches for solubilizing and precipitating proteins from foods using organic solvents and aqueous solutions. Key steps in a typical proteomics workflow are outlined including protein extraction, separation, identification and data analysis. The challenges of analyzing complex food proteomes due to heterogeneity and abundance differences are also noted. Finally, an integrated view of various extraction and fractionation methods employed in food proteomics is presented.
Techniques used for separation in proteomics - Nilesh Chandra
Proteomics aims to characterize the complete set of proteins in a biological system. It faces challenges due to sample complexity and wide protein concentration ranges. Common separation techniques include 2D electrophoresis, 2D-DIGE, ICAT, SILAC, iTRAQ, MudPIT, and protein microarrays. Mass spectrometry is central to protein identification. Data analysis is challenging due to the large datasets and lack of standardization. Effective proteomics requires optimized multi-step workflows combining separation, labeling, mass spectrometry, and bioinformatics.
This document provides information on various computational tools and methods for protein identification, characterization, and structure prediction. It discusses tools that use amino acid composition, sequence alignment, peptide mass fingerprinting, and physico-chemical properties to identify proteins. It also describes methods such as Chou-Fasman, GOR, and neural networks that predict protein secondary structure and properties based on amino acid order, propensities, and probabilities.
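The propensity-based secondary structure prediction mentioned above (Chou-Fasman style) can be sketched as a sliding-window average over per-residue helix propensities. The propensity values below are rounded, illustrative numbers in the spirit of the published table, not the exact Chou-Fasman parameters.

```python
# Illustrative helix propensities (approximate, not the published table)
HELIX_P = {"A": 1.42, "E": 1.51, "L": 1.21, "M": 1.45,
           "G": 0.57, "P": 0.57, "V": 1.06, "S": 0.77}

def helix_window_scores(seq, window=6):
    """Average helix propensity over a sliding window; in Chou-Fasman-style
    prediction, windows with mean propensity above 1.0 nucleate a helix.
    Residues missing from the table default to a neutral 1.0."""
    return [sum(HELIX_P.get(aa, 1.0) for aa in seq[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

scores = helix_window_scores("AEMLAEGPGS")
# The helix-favoring N-terminal window scores above 1.0;
# the G/P-rich C-terminal window scores below 1.0.
print([round(s, 2) for s in scores])
```

The full method adds separate sheet and turn propensities plus extension rules, but this windowed-average nucleation step is its core.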
Kalistratova L., Kireev A. Ordering and density amorphous phase of carbon-fib... - Елена Овечкина
We studied the application of a mathematical model for calculating the X-ray density of a pure amorphous-crystalline polymer to carbon-fiber PTFE composites, taking into account the degree of ordering, crystallinity, and deformation of the crystalline cell in the amorphous phase. By comparing the theoretical densities of the PTFE + CF composite system (computed with the developed software) against experimental densities and X-ray structural parameters, it was shown that the degree of ordering and the density of the amorphous phase of the polymer-matrix PTFE composites decrease linearly as the carbon-fiber content increases. The change in the degree of ordering of the amorphous phase upon filler addition can be considered one of the mechanisms by which the supramolecular structure of composite materials based on amorphous-crystalline polymers is formed.
This document describes a proposed approach called Resource Allocation with Connection Admission Control (RA-CAC) and Adaptive Rate Scheduling (ARS) to improve quality of service for real-time traffic in WCDMA networks. The RA-CAC algorithm determines the optimal number of users to admit while minimizing call rejection rates. The ARS then adjusts transmission rates of admitted sessions based on feedback to better utilize network resources. Simulation results showed this approach increased delivery ratio, throughput and reduced delays compared to other resource allocation methods.
Unified V-Model Approach of Re-Engineering to reinforce Web Application Deve... - IOSR Journals
The document discusses approaches for reengineering web applications. It proposes using a unified V-model approach to reinforce web application development through reengineering. Specifically, it discusses:
1) Using reverse engineering to analyze existing web applications and recover designs, followed by forward engineering to restructure the applications based on new requirements.
2) Applying the V-model at each phase of the web development process during reengineering to incorporate methodology.
3) The reengineering process involves reverse engineering, transformations to adapt to new technologies/requirements, and forward engineering to implement the new design.
I-Function and H-Function Associated With Double Integral - IOSR Journals
The object of this paper is to discuss certain integral properties of the I-function and H-function proposed by Inayat-Hussain, which contain a certain class of Feynman integrals, the exact partition function of a Gaussian model in statistical mechanics, and several other functions as particular cases. In the course of this work, we establish certain new double integral relations pertaining to a product involving the I-function and H-function. These double integral relations are unified in nature and act as key formulae from which we can obtain, as special cases, double integral relations concerning a large number of simpler special functions. For the sake of illustration, we record here some special cases of our main results, which are also new and of interest in themselves. All the results established in this paper are basic in nature and are likely to find useful applications in several fields, notably electrical networks, probability theory, and statistical mechanics.
The Analysis of Selected Physico-Chemical Parameters of Water (A Case Study o... - IOSR Journals
This document analyzes selected physico-chemical parameters of water from the Isu and Calabar rivers in Ebonyi State, Nigeria. Water samples were collected from various points along the rivers and tested for parameters like pH, turbidity, conductivity, alkalinity, total solids, chlorides, sulfates, nitrates, phosphates, and heavy metals. The results were then compared to World Health Organization drinking water standards. Most parameters met WHO standards, but some exceeded them - turbidity in Isu river, chromium downstream of Isu river, lead and cadmium in Calabar river, and arsenic in both rivers. The study aims to evaluate water quality in these rivers given their importance for drinking,
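The comparison step in such a study (measured values against guideline limits) can be sketched as a simple table lookup. The limits below are commonly cited WHO drinking-water guideline values, included for illustration only; verify them against the current WHO tables, and note the sample readings are hypothetical, not the study's actual data.

```python
# Commonly cited WHO drinking-water guideline values (mg/L, except
# turbidity in NTU); illustrative only - check the current WHO tables.
WHO_LIMITS = {"lead": 0.01, "cadmium": 0.003, "arsenic": 0.01,
              "chromium": 0.05, "turbidity_NTU": 5.0}

def flag_exceedances(sample):
    """Return, sorted, the parameters in a water sample that exceed
    their guideline limit."""
    return sorted(p for p, v in sample.items()
                  if p in WHO_LIMITS and v > WHO_LIMITS[p])

# Hypothetical readings from one sampling point
sample = {"lead": 0.02, "cadmium": 0.001, "arsenic": 0.015, "turbidity_NTU": 3.2}
print(flag_exceedances(sample))  # → ['arsenic', 'lead']
```

Parameters absent from the limits table are simply skipped, which mirrors how such studies report only the regulated parameters.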
Optimization Technologies for Low-Bandwidth Networks (IOSR Journals)
This document summarizes optimization techniques for low-bandwidth networks. It discusses how bandwidth, throughput, latency and speed impact internet connections. It then outlines a case study of the Sudanese Universities Information Network (SUIN) which connected universities with low-speed links. The document proposes using network monitoring, implementing policies to define acceptable usage, and technical solutions like caching and filtering to optimize limited bandwidth. User education on bandwidth-friendly practices is also recommended to improve network performance.
Android Management Redefined: An Enterprise Perspective (IOSR Journals)
This document discusses how enterprises can better manage Android devices. It begins by outlining the business requirements and challenges faced by enterprises in adopting Android, such as needing fine-grained policy management and secure user/data access. It then describes how to plan a custom Android solution through understanding requirements, choosing appropriate devices, preparing devices with necessary policies and customizations. A key recommendation is choosing purpose-built devices only for large-scale deployments with specific hardware needs, otherwise using consumer devices with accessories is more cost-effective.
The document discusses using Six Sigma methodology to identify the root causes of lining thickness variation defects in brake shoes during production. Six Sigma is a quality improvement process used to reduce defects by minimizing variation and improving manufacturing processes. The company was experiencing high rejection rates due to lining thickness variation defects, resulting in increased rework and scrap costs. The author aims to apply the Define, Measure, Analyze, Improve, and Control phases of Six Sigma to identify the root causes of the defects and reduce rejection levels. Data on production volumes and defects over the last six months was collected and analyzed. An Ishikawa diagram was created to identify potential causes of the defects related to materials, machines, methods, measurements and personnel.
Measurement of Efficiency Level in Nigerian Seaport after Reform Policy Imple... (IOSR Journals)
This paper focuses on the impact of reforms on port performance using the Onne and Rivers ports as a reference point. It analyses the pre- and post-reform eras of the ports in terms of their performance. The reforms took effect from 1996, after the Federal Government of Nigeria concessioned the ports to private investors. Parameters such as ship traffic, cargo throughput, ship turn-round time, berth occupancy and personnel were used as variables for the assessment. Secondary data were collected from the Nigerian Ports Authority and Integrated Logistic Services Nigeria (Intels) for the period 2001 to 2010 and analysed using Data Envelopment Analysis to assess the efficiency of the ports. The analysis revealed a continuous improvement in the overall efficiency of both ports since 2006, when the new measure was introduced. Average ship turn-around time improved at the ports owing to the modern, fast cargo-handling equipment and additional cargo-handling space provided. There has been an increase in ship traffic calling at the ports, resulting in increased cargo throughput and berth occupancy rates at Onne and Rivers. The reform also led to more private investment in the ports' existing and new facilities and the introduction of world-class service in port operations. The study concludes that the ports of Onne and Rivers are performing better under the reform programme of the Federal Government of Nigeria. It finally recommends the urgent need for a regulator to appraise the performance of the reform programme from time to time, as provided for in the agreement, and the full adoption and utilization of a management information system (MIS) to aid performance efficiency.
This document discusses the application of smart energy meters in the Indian energy context. It begins with an introduction to the increasing demand for electricity in India and issues like energy theft and inaccurate metering. It then discusses how smart meters can address these issues through automated meter reading and two-way communication. The key components and functioning of a smart metering system are explained, including the microcontroller program, real-time clock, communication port, and software. Finally, the document provides a case study where a smart meter is installed in a residential building to monitor parameters like voltage, current and power factor over a period of time.
This document discusses quality assurance in technical vocational education (TVE) for sustainable national development in the 21st century. It defines TVE and outlines its importance for providing skilled workers and empowering youth. The status of TVE in Nigeria is examined, noting issues like inadequate funding, resources, and the perception of TVE. Quality assurance is defined as measures to ensure TVE achieves its goals. Sustainable development and the role of TVE in enabling it are also discussed. The document concludes with recommendations like increasing government funding for TVE to improve its quality and contribution to Nigeria's sustainable development.
This document provides a review of sentiment mining and related classifiers. It begins with an introduction to data mining and web mining. It then discusses related work on applying techniques like content, descriptive and network analytics to tweets to gain supply chain insights. The document also covers the basic workflow of opinion mining including preprocessing, feature extraction and selection, and feature weighting. It compares classifiers like Naive Bayes, decision trees, k-nearest neighbor, and support vector machines. Finally, it discusses applications of sentiment analysis in areas like commercial markets, products, maps, software, and voting. It also discusses the importance of opinion mining in governance.
This document summarizes a research paper that proposes a dual-input single-stage inverter topology for standalone solar photovoltaic systems to provide electricity in rural areas without access to the electric grid. The proposed system uses a maximum power point tracking algorithm and boost converter to increase the low voltage from the solar panels. It then uses a single-stage boost inverter with sinusoidal pulse width modulation to efficiently convert the solar DC power to high-quality AC power for loads without additional filters or protections. Simulation and experimental results showed the system could boost input voltages and produce 230V AC output for rural electrification with reduced components compared to traditional two-stage inverter designs.
The Performance Analysis of a Fettling Shop Using Simulation (IOSR Journals)
A fettling shop is the product-finishing shop for casting products. After knockout, the casting is taken to the fettling shop for fettling work. The fettling process includes cutting, shot blasting, grinding and painting; in these processes the sand and extra metal on the castings are removed. The project titled "The performance analysis of a fettling shop using simulation" is based on the fettling shop of a casting industry. The main aim of the project is the performance analysis of the fettling shop. It is a simulation-based project carried out using the simulation tool Arena. The main concepts related to the performance analysis are bottleneck analysis, productivity analysis and system improvement analysis.
Optimized Traffic Signal Control System at Traffic Intersections Using Vanet (IOSR Journals)
Abstract: Traditional automated traffic signal control systems normally schedule vehicles at an intersection in pre-timed slots. This pre-timed controller approach fails to minimize the waiting time of vehicles at the intersection because it does not consider vehicle arrival times. To overcome this problem, an adaptive and intelligent traffic control system is proposed in which a traffic signal controller with a wireless radio is installed at the intersection and treated as infrastructure. All vehicles are equipped with onboard location and speed sensors and a wireless radio to communicate with the infrastructure, thereby forming a VANET. Once vehicles enter the boundary of the traffic area, they broadcast their positional information as data packets with their IDs encapsulated. The controller at the intersection receives the transmitted packets from all legs of the intersection and stores them in a temporary log file. The controller then runs a platooning algorithm to group the vehicles into platoons of approximately equal size, formed on the basis of the data disseminated by the vehicles. Next, the controller runs an Oldest Job First algorithm that treats platoons as jobs. The algorithm schedules jobs in a conflict-free manner and ensures all jobs receive equal processing time, i.e. the vehicles of each platoon cross the intersection with equal delays. The proposed approach is evaluated under various traffic volumes and its performance is analyzed.
Keywords: Conflict graphs, online job scheduling, traffic signal control, vehicular ad hoc network (VANET) simulation, vehicle-actuated traffic signal control, Webster's algorithm.
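The Oldest Job First idea described above can be sketched in a few lines: platoons are treated as jobs and the controller always serves the one that has waited longest. The tuple layout, leg names and data below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of Oldest Job First (OJF) scheduling of platoons.
# Each platoon is (arrival_time, leg, size); the oldest arrival is served first.
def oldest_job_first(platoons):
    """Return the service order, oldest arrival first."""
    return sorted(platoons, key=lambda p: p[0])

platoons = [(4.0, "north", 5), (1.5, "east", 6), (2.2, "south", 4)]
order = oldest_job_first(platoons)
legs = [p[1] for p in order]
print(legs)  # ['east', 'south', 'north']
```

A real controller would additionally resolve conflicting movements via the conflict graph mentioned in the keywords; this sketch only captures the job-ordering rule.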
A Study on the Relationship between Nutrition Status and Physical Fitness of ... (IOSR Journals)
Abstract: Nutritional status during school age is a major determinant of nutritional and health status in adult life. Many studies have shown that undernutrition and anaemia have an adverse impact on performance and consequently lead to reduced wages for persons employed in manual labour. The past three decades have witnessed the emergence of overnutrition as a problem in school-age children in developed countries and in affluent urban segments of developing countries. The main determinants of performance are physical fitness and skill. Longitudinal studies have shown that lifestyle and physical fitness during childhood and adolescence are major determinants of lifestyle, physical fitness and freedom from non-communicable diseases in adult life.
Intelligent Fault Identification System for Transmission Lines Using Artifici... (IOSR Journals)
Transmission and distribution lines are vital links between generating units and consumers. Because they are exposed to the atmosphere, the chance of a fault occurring on a transmission line is very high, and faults must be dealt with immediately to minimize the damage they cause. This paper focuses on detecting faults on electric power transmission lines using artificial neural networks. A feed-forward neural network trained with the back-propagation algorithm is employed. An analysis of neural networks with varying numbers of hidden layers and neurons per hidden layer is provided to validate the choice of network at each step. The developed neural network is capable of detecting single line-to-ground and double line-to-ground faults on all three phases. Simulation in MATLAB Simulink demonstrates that artificial neural network based methods are efficient at detecting faults on transmission lines and achieve satisfactory performance. A 300 km, 25 kV transmission line is used to validate the proposed fault detection system, and a hardware implementation of the neural network is realized on a TMS320C6713 DSP.
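A feed-forward network of the kind described above reduces at inference time to a layered weighted sum with a nonlinearity. The sketch below shows that forward pass only; the weights are arbitrary placeholders and the input layout (normalized per-phase quantities) is an assumption, not the paper's trained fault detector.

```python
# Minimal forward pass of a feed-forward neural network (sigmoid activations).
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    # One hidden layer, then the output layer; each row is one neuron's weights.
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
    return [sigmoid(sum(wi * hi for wi, hi in zip(row, hidden))) for row in w_out]

x = [0.9, 0.1, 0.1]  # e.g. normalized three-phase measurements (invented)
w_hidden = [[1.0, -1.0, 0.5], [0.3, 0.8, -0.2]]
w_out = [[1.2, -0.7]]
y = forward(x, w_hidden, w_out)
print(len(y))  # a single output in (0, 1), thresholded as fault / no fault
```

Training with back-propagation would adjust `w_hidden` and `w_out`; only the inference structure is shown here.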
Vitality of Physics In Nanoscience and Nanotechnology (IOSR Journals)
This document discusses the vital role of physics in nanoscience and nanotechnology. It explains that at the nanoscale, physics is different due to quantum effects and a high surface area to volume ratio. Properties like band structure and optical properties can be altered at the nanoscale. The document also discusses manufacturing approaches like top-down and bottom-up methods and how they apply physics principles to create nanomaterials. Finally, it notes that nanomaterials can have significantly different properties than bulk materials of the same composition due to their small size and large surface area.
Vision Based Object's Dimension Identification To Sort Exact Material (IOSR Journals)
This document describes a vision-based system using a robotic arm and image processing to identify the dimensions of objects and sort them. A 3 DOF robotic arm picks objects from a conveyor belt. A load cell first measures the weight and compares it to a preset value. If it matches, a camera captures an image that is processed using LabVIEW to determine the object's width and height in pixels. These are compared to preset dimension values. If dimensions match the weight, the object is sorted into an "accepted" pallet. Otherwise, it is placed in a "rejected" pallet. The system aims to accurately sort objects by both weight and size using integrated sensing, vision processing, and robotic manipulation.
This document summarizes an approach to enhance security in a content-based publish/subscribe system using identity-based encryption. It discusses using identity-based encryption to generate public and private keys for publishers and subscribers. When a publisher encrypts an event using attribute-based encryption, the encrypted event can only be decrypted by a subscriber if their private key matches the credential embedded in the encrypted event. This allows the encrypted event to be routed to the correct subscriber without revealing the event contents. The document evaluates the performance of the proposed approach through simulation studies.
The Effects of Industrial Environment, Innovation, and Government Policy on B... (IOSR Journals)
This research aims to provide information about the effects of the industrial environment on business performance, the industrial environment on business performance with innovation as a moderating variable, innovation on business performance, and innovation on business performance with government policy as a moderating variable. The population of this research is all small industries, especially Tenun Songket Riau in Pekanbaru City, Bengkalis Sub-District, and Siak Sub-District, comprising 330 business units. The sampling method used is proportional sampling, with a total sample of 110 business units. Structural Equation Modeling (SEM) is used for data analysis, processed with AMOS 16 software. The findings of this study are as follows: (1) a more dynamic industrial environment results in better business performance of the small Riau songket weaving industry; (2) a more dynamic industrial environment supports innovation capability and leads to better business performance; (3) higher innovation capability results in better business performance; and (4) higher innovation capability, supported by a conducive government policy, leads to better business performance of the small Riau songket weaving industry.
QSAR Studies of the Inhibitory Activity of a Series of Substituted Indole and... (inventionjournals)
The HF method with the 6-31G(d) basis set was employed to calculate some quantum chemical descriptors of 37 substituted indoles. The best descriptors were selected to establish a quantitative structure-activity relationship (QSAR) for the inhibitory activity against isoprenylcysteine carboxyl methyltransferase (Icmt), using principal component analysis (PCA), multiple regression analysis (MLR), nonlinear regression (RNLM) and an artificial neural network (ANN). We accordingly propose a quantitative model and interpret the activity of the compounds on the basis of the multivariate statistical analysis. This study shows that the MLR and RNLM models serve to predict activity, but when their results are compared with those of the ANN model, we conclude that the predictions achieved by the latter are more effective and substantially better. The statistical results indicate that the model is statistically significant and shows very good stability towards data variation in the validation method. The contribution of each descriptor to the structure-activity relationship is evaluated.
Proteomics Practical (NMR and Protein 3D software) (iqraakbar8)
The document discusses protein 3D structure determination using computational modeling software. It describes different computational modeling methods like homology modeling, threading/fold recognition, and ab initio modeling. Homology modeling involves comparing the target sequence to known protein structures while threading/fold recognition compares the target to known structural templates. Ab initio modeling produces structures based only on the sequence. Popular software tools for each method are discussed like Modeller, SwissModel, I-TASSER, and Rosetta. The document also provides an overview of using nuclear magnetic resonance (NMR) spectroscopy to study protein structures experimentally.
fMRI Segmentation Using Echo State Neural Network (CSCJournals)
This research work proposes a new intelligent segmentation technique for functional Magnetic Resonance Imaging (fMRI), implemented using an Echo State Neural Network (ESN). Segmentation is an important process that helps in identifying objects in an image. Existing segmentation methods are not able to segment the complicated profile of fMRI accurately, yet correctly segmenting every pixel of the fMRI helps locate a tumor properly. The presence of noise and artifacts poses a challenging problem for proper segmentation. The proposed ESN is an estimation method with energy minimization, and its estimation property helps in better segmentation of the complicated fMRI profile. The performance of the new segmentation method is found to be better, with a higher peak signal-to-noise ratio (PSNR) of 61 compared to the PSNR of 57 for the existing back-propagation algorithm (BPA) segmentation method.
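The PSNR figures quoted above (61 vs. 57) follow from the mean squared error between a segmented image and a reference. Below is the standard formula on invented 8-bit pixel values, not real fMRI data, just to make the metric concrete.

```python
# Peak signal-to-noise ratio between a reference image and a result,
# computed from the mean squared error (images flattened to value lists).
import math

def psnr(reference, result, peak=255.0):
    mse = sum((r - s) ** 2 for r, s in zip(reference, result)) / len(reference)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

ref = [120, 130, 140, 150]   # toy reference pixels
out = [121, 129, 141, 150]   # toy segmentation output
value = round(psnr(ref, out), 1)
print(value)  # 49.4
```

Higher PSNR means the segmentation output deviates less from the reference, which is how the 61-vs-57 comparison in the abstract should be read.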
Xia Z., Gardner D.P., Gutell R.R., and Ren P. (2010). Coarse-Grained Model for Simulation of RNA Three-Dimensional Structures. The Journal of Physical Chemistry B, 114(42):13497-13506.
USING ARTIFICIAL NEURAL NETWORK IN DIAGNOSIS OF THYROID DISEASE: A CASE STUDY (ijcsa)
Nowadays, one of the main challenges in the medical sciences created by developing technology is disease diagnosis with high accuracy. In recent decades, Artificial Neural Networks (ANNs) have been considered among the best solutions to achieve this goal and have been used in widespread research to diagnose diseases. In this paper, we consider a Multi-layer Perceptron (MLP) ANN using the back-propagation learning algorithm to classify thyroid disease. It consists of an input layer with 5 neurons, a hidden layer with 6 neurons and an output layer with just 1 neuron. Suitable selection of the activation function, the number of neurons in the hidden layer and the number of layers is achieved by trial and error. Our simulation results indicate that the optimized MLP ANN can reach an accuracy of 98.6%.
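The 5-6-1 topology described above can be made concrete by counting its trainable parameters (weights plus biases). The layer sizes come from the abstract; everything else here is illustrative.

```python
# Parameter count of a fully connected 5-6-1 MLP (weights + biases per layer).
layers = [5, 6, 1]  # input, hidden, output neurons, as stated in the abstract
params = sum((n_in + 1) * n_out for n_in, n_out in zip(layers, layers[1:]))
print(params)  # (5+1)*6 + (6+1)*1 = 43
```

The small parameter count (43) is why a trial-and-error search over layer sizes and activation functions, as the authors describe, is feasible for this problem.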
The efficacy of neural network (NN) and partial least squares (PLS) methods is compared for the prediction of NMR chemical shifts for both 1H and 13C nuclei using very large databases containing millions of chemical shifts. The chemical structure description scheme used in this work is based on individual atoms rather than functional groups. The performances of each of the methods were optimized in a systematic manner described in this work. Both of the methods, least squares and neural network analysis, produce results of a very similar quality but the least squares algorithm is approximately 2-3 times faster.
1) The authors validate the performance of a neural network-based 13C NMR prediction algorithm using the publicly available NMRShiftDB database containing over 214,000 chemical shifts.
2) They find that the mean error between predicted and experimental shifts for the entire database is 1.59 ppm, with 50% of shifts predicted within 1 ppm error.
3) The database was divided based on whether shifts were present or absent from the training set used to develop the prediction algorithm. Slightly better accuracy was seen for shifts present in the training set.
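The accuracy statistics quoted above (mean error of 1.59 ppm, 50% of shifts within 1 ppm) reduce to simple aggregates over predicted versus experimental shifts. The shift values below are invented for illustration; only the two formulas mirror the reported statistics.

```python
# Mean absolute error and fraction within 1 ppm for predicted vs. experimental
# 13C chemical shifts (values in ppm, invented for the example).
pred = [128.4, 77.2, 21.0, 170.3]
expt = [127.0, 77.5, 23.5, 170.1]

errors = [abs(p - e) for p, e in zip(pred, expt)]
mean_error = sum(errors) / len(errors)
within_1ppm = sum(e <= 1.0 for e in errors) / len(errors)
print(round(mean_error, 2), within_1ppm)  # 1.1 0.5
```

Splitting the database by training-set membership, as the authors did, would simply mean computing these two aggregates separately for each subset.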
This document describes a study that uses machine learning algorithms to efficiently predict DNA-binding proteins. Support vector machines and cascade correlation neural networks are optimized and compared to determine the best performing model. The SVM model achieves 86.7% accuracy at predicting DNA-binding proteins using features like overall charge, patch size, and amino acid composition of proteins. The CCNN model achieves lower accuracy of 75.4%. The study aims to improve on previous work by using the standard jack-knife validation technique to evaluate model performance on unseen data.
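The jack-knife (leave-one-out) validation mentioned above is easy to sketch: each sample is held out once and predicted from the rest. A trivial one-dimensional nearest-neighbour rule stands in for the paper's SVM here; the feature values and labels are invented.

```python
# Jack-knife (leave-one-out) accuracy with a 1-nearest-neighbour stand-in
# classifier on a single invented feature (e.g. overall charge).
def jackknife_accuracy(samples, labels):
    correct = 0
    for i in range(len(samples)):
        # Train on everything except sample i.
        train = [(s, l) for j, (s, l) in enumerate(zip(samples, labels)) if j != i]
        pred = min(train, key=lambda t: abs(t[0] - samples[i]))[1]
        correct += pred == labels[i]
    return correct / len(samples)

samples = [0.1, 0.2, 0.9, 1.0]
labels = ["non-binding", "non-binding", "binding", "binding"]
acc = jackknife_accuracy(samples, labels)
print(acc)  # 1.0
```

The point of the procedure is that every accuracy figure is measured on data the classifier never saw during training, which is why the study prefers it for evaluating the SVM and CCNN models.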
This paper introduces the Artificial Neural Network (ANN) function to model probabilistic dependencies in supervised classification tasks for discriminating between earthquakes and explosions. ANNs are used as discriminating tools to classify natural seismic events (earthquakes) from artificial ones (man-made explosions) based on seismic signals recorded at regional distances. The core of our contribution is improving the numerical results obtained using this advanced technique. By testing different types of seismic features, the ANNs showed the potential of this method to discriminate the classes. During this study we found that the neural networks are used in a fully innovative manner in this work: the ARMA-coefficient filters detect the type of the source whenever a natural or artificial source changes the nature of the background noise of the seismograms. We also found that this algorithm is sometimes capable of signalling further natural seismological events just before their onset.
This document reports on a study of the molecular structure, vibrational spectra, and electronic properties of (E)-N′-(4-Methoxybenzylidene)pyridine-3-carbohydrazide dihydrate (MBP3CD·2H2O) using density functional theory calculations and experimental techniques. The authors synthesized the compound and characterized it using FTIR, FT-Raman, and UV–Vis spectroscopy. They then used DFT calculations to optimize the molecular geometry, simulate the vibrational spectra, and analyze properties like hyperpolarizability. The calculated spectra agreed well with experimental data. Analysis of molecular orbitals, reactivity, and thermodynamics provided insight into
Characterization of the phi29 Bacteriophage Nanomotor (pcpchic)
The purpose is to present the evolution of our understanding of the Bacillus subtilis phi29 bacteriophage viral motor's structure, based on the work of a prominent scientist, Peixuan Guo.
Qualitative analysis of Fruits and Vegetables using Earth's Field Nuclear Mag... (IJERA Editor)
Among imaging techniques, magnetic resonance imaging (MRI) is a non-contact, non-invasive technique for obtaining images of objects rich in water content and provides an excellent tool to study the variation of contrast among soft tissues. It often utilizes a linear magnetic field gradient to obtain an image that combines the visualization of molecular structure and dynamics. It measures the characteristics of the hydrogen nuclei of water, and of nuclei with similar chemical shifts, as modified by the chemical environment across the object. In the present work, MRI of fresh tomatoes has been recorded using Terranova-MRI for qualitative analysis. The technique is effective, powerful and reliable as an investigative tool in quality analysis and the diagnosis of infections in fruits and vegetables.
The document summarizes a study that investigated the interaction between the Hfq protein from E. coli bacteria and DNA. Specifically, it examined the binding of two regions of Hfq - the carboxyl terminal region (CTR) and mutations in the amino terminal region (NTR) - to DNA using techniques like isothermal titration calorimetry and electrophoretic mobility shift assays. The results showed that both the CTR and mutated NTR proteins bound to DNA to some degree. Atomic force microscopy was also used to characterize the self-assembly behavior of CTR peptides in order to aid interpretation of the calorimetry data.
PERFORMANCE ANALYSIS OF NEURAL NETWORK MODELS FOR OXAZOLINES AND OXAZOLES DER... (ijistjournal)
Neural networks have been applied successfully to a broad range of areas such as business, data mining, drug discovery and biology. In medicine, neural networks have been applied widely in medical diagnosis, detection and evaluation of new drugs, and treatment cost estimation. In addition, neural networks have begun to be used in data mining strategies for prediction and knowledge discovery. This paper presents the application of neural networks to the prediction and analysis of the antitubercular activity of oxazoline and oxazole derivatives. The study presents techniques based on five artificial neural network (ANN) models: a single hidden layer feed-forward neural network (SHLFFNN), gradient descent back-propagation neural network (GDBPNN), gradient descent back-propagation with momentum neural network (GDBPMNN), back-propagation with weight decay neural network (BPWDNN) and quantile regression neural network (QRNN). We comparatively evaluate the performance of these five techniques. Evaluating the efficiency of each model by way of benchmark experiments is accepted practice; cross-validation and resampling techniques are commonly used to derive point estimates of performance, which are compared to identify methods with good properties. Predictive accuracy was evaluated using the root mean squared error (RMSE), coefficient of determination (R2), mean absolute error (MAE), mean percentage error (MPE) and relative squared error (RSE). We found that all five neural network models were able to produce feasible models, and the QRNN model outperforms the other four on all statistical tests.
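The comparison metrics named above (RMSE, R2, MAE, MPE, RSE) are standard regression aggregates. Minimal implementations on toy data follow, under the usual textbook definitions; the specific values are invented.

```python
# Standard regression error metrics: RMSE, R2, MAE, MPE (%), RSE.
import math

def metrics(actual, predicted):
    n = len(actual)
    mean_a = sum(actual) / n
    sq_err = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    rmse = math.sqrt(sq_err / n)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1.0 - sq_err / ss_tot          # coefficient of determination
    rse = sq_err / ss_tot               # relative squared error
    mpe = 100.0 * sum((a - p) / a for a, p in zip(actual, predicted)) / n
    return rmse, r2, mae, mpe, rse

actual = [2.0, 4.0, 6.0]
predicted = [2.5, 3.5, 6.0]
rmse, r2, mae, mpe, rse = metrics(actual, predicted)
print(round(rmse, 3), round(mae, 3))  # 0.408 0.333
```

Note that R2 and RSE are complementary here (R2 = 1 - RSE), which is why a model that minimizes RSE also maximizes R2.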
Study Of The Fault Diagnosis Based On Wavelet And Fuzzy Neural Network For Th... (IJRES Journal)
In motor fault diagnosis, vibration signals can fully reflect the status of the motor. In this paper, on the basis of wavelet packet fault feature extraction, a new approach to motor fault diagnosis based on wavelet packet analysis and a fuzzy RBF neural network is presented. The method obtains the energy of the characteristic channels of the bearing-failure vibration signals of an asynchronous motor using wavelet packet analysis, and composes the characteristic vector used as input to the fuzzy RBF neural network for diagnosing induction motor bearing failures. The method overcomes the slow convergence, long training time and local minimum problems encountered when using a BP neural network. Experimental results show that using a fuzzy RBF neural network can improve the accuracy of motor fault diagnosis.
This document summarizes research on using artificial neural networks (ANNs) to automatically analyze and classify surface electromyography (SEMG) signals. The researchers:
1) Collected SEMG data from normal subjects and those with myopathies during muscle contractions. They extracted features using autoregressive (AR) modeling of signal segments.
2) Compared the classification performance of ANNs (backpropagation, self-organizing feature map, probabilistic neural network) to Fisher's linear discriminant analysis. The ANNs achieved over 90% correct classification while the linear method was poorer.
3) Concluded that properly processed SEMG combined with ANN classification can provide an automated diagnostic assist tool for physicians to help
Crude Oil Price Prediction Based on Soft Computing Model: Case Study of Iraq (Kiogyf)
This paper proposes using a multi-layer perceptron neural network (MLP-NN) soft computing model to accurately predict future crude oil prices in Iraq. The performance of the MLP-NN model is compared to other neural network approaches and found to perform better, especially with limited training data and high parameter variability. The paper describes the MLP-NN model and its training process using a dataset of Iraqi crude oil prices from 1990 to 2018. Features like mutual information analysis and data normalization are used as part of the model building process.
Construction of phylogenetic tree from multiple gene trees using principal co... (IAEME Publication)
This document describes a method for constructing a phylogenetic tree from multiple gene trees using principal component analysis. Multiple gene trees are generated from different protein sequences from various organisms. Distance matrices are calculated for each gene tree and combined into a single data matrix. Principal component analysis is performed on the data matrix to extract the first principal component, which represents the consensus distance vector combining information from all gene trees. A phylogenetic tree is then generated from the consensus distance vector using UPGMA, providing a species tree that integrates information from multiple genes. The method is demonstrated on protein sequence data from primates and placental mammals.
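The consensus idea above, where distance matrices from several gene trees are combined into one matrix from which a species tree is built, can be sketched briefly. For brevity a plain element-wise average stands in for the paper's PCA-derived consensus vector, and the taxa and distances are invented.

```python
# Combine pairwise distance matrices from two gene trees into a consensus
# matrix (simple average here, in place of the paper's PCA first component),
# then perform the first UPGMA step: merge the closest pair of taxa.
gene1 = {("A", "B"): 2.0, ("A", "C"): 6.0, ("B", "C"): 6.0}
gene2 = {("A", "B"): 4.0, ("A", "C"): 8.0, ("B", "C"): 8.0}

consensus = {pair: (gene1[pair] + gene2[pair]) / 2 for pair in gene1}

closest = min(consensus, key=consensus.get)
print(closest, consensus[closest])  # ('A', 'B') 3.0
```

Full UPGMA would repeat this merge step, averaging distances to the newly formed cluster, until a single rooted tree remains; the paper's contribution is the PCA step that produces the consensus distances fed into it.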
Similar to Identification of Skeleton of Monoterpenoids from 13C NMR Data Using Generalized Regression Neural Network (GRNN)
Identification of Skeleton of Monoterpenoids from 13C NMR Data Using Generalized Regression Neural Network (GRNN)
IOSR Journal of Applied Chemistry (IOSR-JAC)
e-ISSN: 2278-5736. Volume 8, Issue 1 Ver. II (Jan. 2015), PP 11-19
www.iosrjournals.org
DOI: 10.9790/5736-08121119
Identification of Skeleton of Monoterpenoids from 13C NMR Data Using Generalized Regression Neural Network (GRNN)
Taye Temitope Alawode(1), Kehinde Olukunmi Alawode(2)
(1) Department of Chemical Sciences, Federal University Otuoke, Bayelsa State, Nigeria
(2) Department of Electrical and Electronic Engineering, Osun State University, Osogbo, Osun State, Nigeria
Abstract: This paper describes the use of a Generalized Regression Neural Network (GRNN) in the identification of the various skeletons of monoterpenoid compounds from their 13C NMR chemical shift data. Towards this end, 13C NMR chemical shift data for the skeletons of 328 compounds belonging to various classes of monoterpenoids were used as input for the network. To generate the output data for the network, each compound was assigned a code of 1 for its own skeletal class, while every other possible skeleton type was given a code of 0. These data were used to train the network at varying spread constant values. After training, the network was simulated using 113 test compounds. At a spread constant of 15, the network achieved between 99.98% and 100% recognition of the Myrcane skeleton, 100% recognition of the Santoline skeleton, and 87.63-100% recognition of the Menthane skeleton. The network, however, could not successfully identify the Bornane and Pinane skeletons. To correct this anomaly, the training data for these classes of compounds were increased and the network re-trained. The results obtained improved considerably, with between 68.25% and 99.95% recognition of the Bornane skeleton and 83.86% to 100% recognition of the Pinane skeleton. GRNN could be a powerful complementary tool in the elucidation of the structures of monoterpenoids.
Keywords: 13C NMR, GRNN, Monoterpenoid, Simulation, Skeleton.
I. Introduction
Structural determination of natural products usually requires vast experience in spectral analysis. The fundamental stage in the process of structural elucidation is the determination of the compound's carbon skeleton, as this forms the basic unit of the class to which the substance belongs. However, this is often difficult owing to the high structural variety and diversity encountered in natural products chemistry. Studies in the structural elucidation of monoterpenoids are important because this class of naturally occurring compounds possesses important pharmacological activities [1]. The advent of Computer-Assisted Structural Elucidation (CASE) methods has simplified the interpretation of complex organic compounds, especially in the field of natural products. A high-quality reference library containing both structures and complete spectra, or substructures and subspectra, representative of the types of compounds encountered in the laboratory, is an invaluable component of a CASE system [2, 3]. The premise implicit in spectrum interpretation is that if the spectrum of the unknown and a reference library spectrum have a subspectrum in common, then the corresponding reference substructure is also present in the unknown. The components generated by spectrum interpretation are fed into a structure generator, which exhaustively generates all possible structures from these components. Examples of structure generators include MOLGEN, GENIUS and COCON; their applications are described elsewhere [4]. Procedures that utilize 13C NMR for skeleton identification have been previously developed and utilized with excellent results [5, 6, 7, 8].
Rufino et al. [9], who applied Artificial Neural Networks to the identification of skeletons of Aporphine alkaloids from 13C NMR data, asserted that ANNs, because of their parallel nature, can speed up the process of structural elucidation. ANNs have been applied to the prediction of the biological activity of natural products and congeneric compounds [10, 11], to the identification, distribution and recognition of patterns of chemical shifts from 1H NMR spectra [12, 13], and to the identification of chemical classes through 13C NMR spectra [14]. ANNs are computational models derived from a simplified concept of the brain, in which a number of nodes, called neurons, are interconnected in a network-like structure [15]. Fig. 1 shows a single neuron model.
Figure 1: Single Neuron Model
Neural networks are nonlinear processes that perform learning and classification. Artificial neural networks consist of a large number of interconnected processing elements, known as neurons, that act as microprocessors. Each neuron accepts a weighted set of inputs and responds with an output. In general, neural networks are adjusted (trained) so that a particular input leads to a specific target output, and training continues until the network output matches the target; in this way the network can learn the system. The learning ability of a neural network depends on its architecture and on the algorithm applied during training. A neural network is usually divided into three parts: the input layer, the hidden layer and the output layer. The information contained in the input layer is mapped to the output layer through the hidden layers.
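The weighted-input/output behaviour of the single neuron in Fig. 1 can be sketched in a few lines; the tanh activation and all variable names here are illustrative assumptions, since the paper does not fix a particular activation function.

```python
import numpy as np

def neuron(inputs, weights, bias, activation=np.tanh):
    """A single artificial neuron: a weighted sum of the inputs plus a
    bias term, passed through a nonlinear activation function."""
    return activation(np.dot(weights, inputs) + bias)

# Example: two inputs whose weighted contributions cancel give tanh(0) = 0
out = neuron(np.array([1.0, 2.0]), np.array([0.5, -0.25]), bias=0.0)
```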
In the present work, we show that Generalized Regression Neural Networks (GRNNs), one of the architectures of artificial neural networks, can identify the skeletons of unknown monoterpenoid compounds among different monoterpenoid skeletons: Myrcane and Santoline (acyclic monoterpenoids), Menthane (monocyclic monoterpenoids), Thujane, Bornane, Isocamphane and Fenchane (bicyclic monoterpenoids), and Pinane (a bicyclic monoterpenoid). A Generalized Regression Neural Network consists of four layers: an input layer, a pattern layer, a summation layer and an output layer, as shown in Fig. 2. The theory of Generalized Regression Neural Networks has been described elsewhere [16].
Figure 2: General Structure of GRNN
Compared to other ANN models such as the backpropagation neural network, a GRNN needs only a fraction of the training samples a backpropagation network would require, and can therefore converge to the underlying function of the data even when few training samples are available [17]. Furthermore, whereas determining the best values for the several parameters of other networks is difficult and often involves trial and error, a GRNN requires only one parameter (the spread constant) to be adjusted experimentally. This makes the GRNN a very useful tool for performing predictions and comparisons of system performance in practice. Previous works comparing the predictive capability of GRNNs with backpropagation neural networks and other nonlinear regression techniques highlighted the advantages of the GRNN: excellent approximation ability, fast training time, and exceptional stability during the prediction stage [18,19].
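At its core, the GRNN of [17] is a Gaussian-kernel regression over the training samples: every training compound becomes one pattern-layer unit, and the summation layer forms the kernel-weighted average of the training targets, with the spread constant setting the kernel width. A minimal sketch (Python for illustration; this is not the paper's MATLAB implementation):

```python
import numpy as np

def grnn_predict(X_train, Y_train, X_test, spread):
    """Generalized Regression Neural Network prediction.

    Each training sample acts as one pattern-layer unit with a Gaussian
    kernel; the summation layer returns the kernel-weighted average of
    the training targets (Nadaraya-Watson regression)."""
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to patterns
        w = np.exp(-d2 / (2.0 * spread ** 2))     # pattern-layer activations
        w_sum = np.sum(w)
        if w_sum == 0:                            # guard against total underflow
            preds.append(np.mean(Y_train, axis=0))
        else:
            preds.append(w @ Y_train / w_sum)     # summation / output layer
    return np.array(preds)
```

With one-hot target rows (as used for the skeleton classes here), each output row is a set of class probabilities summing to one.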
Identification of Skeleton of Monoterpenoids from 13C NMR Data Using GRNN. DOI: 10.9790/5736-08121119, www.iosrjournals.org
II. Materials And Methods
For identification purposes and for structural elucidation of new compounds, it is necessary to have access to an extensive list of structural data. In the present study, we made use of the structural (skeletal) 13C data of compounds reviewed and published by [20]. This information can be extracted from data on monoterpenoids published in the literature by separating the 13C values of the skeletal carbons from those of the substituents. ANNs work by learning; their training must therefore be done with detailed and correct data to avoid an erroneous learning process. A total of 441 compounds were employed in this study. Of these, 113 were reserved for use as test cases (these were not used in training the neural network): 33 Myrcane, 3 Santoline, 38 Menthane, 5 Thujane, 12 Bornane, 3 Isocamphane, 15 Pinane and 4 Fenchane monoterpenoid compounds. ANNs learn through examples, and the test compounds were selected based on the representativeness of their skeletons among the data used for training. The skeletons of the compounds and the numbering of the carbon atoms are shown in Fig. 3.
Figure 3: Skeletons of the monoterpenoid compounds used (panels: Myrcane, Thujane, Isocamphane, Bornane, Santoline, Pinane and Fenchane skeletons, with carbon numbering)
Three Excel worksheets containing coded information on the input and target data for the training and test compounds were prepared. On the first row of the first sheet, the training compounds were assigned codes 1-328. In the first column of the same sheet, the positions of the carbon atoms on the skeleton (as shown in Figure 3) were coded 1-10. The 13C chemical shift of the carbon at each of the 10 positions was recorded for each compound; these values constitute the input data subsequently used in training the network. A second Excel sheet in the same format was prepared, containing the 13C chemical shift data for the test compounds (coded 1-113). The 13C chemical shift data for the skeletons of the test compounds are presented in Table 1. The target data were prepared on the third Excel sheet, with the compounds again assigned codes 1-328. In the first column of this sheet, the eight skeletons were listed vertically. Each compound is identified as belonging to a particular skeleton by a code of 1 or 0: a compound belonging to a given skeleton type is assigned 1 for that skeleton and 0 for all the others.
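The worksheet layout just described amounts to a 10-component input vector (one 13C shift per skeletal carbon position) and an 8-component one-hot target per compound. A minimal sketch of this encoding (Python for illustration; the example input row is test compound 106 from Table 1, a Pinane):

```python
import numpy as np

# The eight skeleton classes, in the order used for the target worksheet.
SKELETONS = ["Myrcane", "Santoline", "Menthane", "Thujane",
             "Bornane", "Isocamphane", "Pinane", "Fenchane"]

def encode_targets(labels):
    """One row per compound: 1 in the column of its skeleton class, 0 elsewhere."""
    T = np.zeros((len(labels), len(SKELETONS)))
    for i, name in enumerate(labels):
        T[i, SKELETONS.index(name)] = 1.0
    return T

# Input row: 13C chemical shifts (ppm) at skeletal positions C-1..C-10,
# here for test compound 106 of Table 1.
X = np.array([[43.4, 147.8, 117.0, 80.8, 41.0, 38.0, 31.2, 21.1, 26.2, 65.8]])
T = encode_targets(["Pinane"])
```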
Table 1: 13C NMR chemical shift data for test compounds
After the construction of the worksheets, the data were transferred into the Neural Network Toolbox of MATLAB 7.8.0. From the command window, the 'nntool' command was used to designate the imported data as 'input' or 'target'. The Generalized Regression Neural Network architecture was selected, and the skeleton-identification system was trained at spread constants of 0.5, 1, 2.5, 5, 7.5, 9, 10, 12, 15, 17.5, 20, 25, 30, 50 and 100. The effectiveness of training at each spread constant was assessed by simulation with the test data (not previously used for training and therefore unknown to the network). The aim was to ascertain whether the neural network could correctly identify the skeleton type to which each test compound belongs. The GRNN at a spread constant of 15.0 was chosen as the baseline for presenting results, as all classes of compounds gave reasonably good results at this value.
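The train-then-simulate loop over spread constants can be sketched outside MATLAB as follows. The data here are synthetic stand-ins (three well-separated classes of 10-dimensional vectors), not the paper's compound set; the GRNN itself follows Specht's formulation:

```python
import numpy as np

def grnn(Xtr, Ttr, Xte, spread):
    """GRNN simulation: Gaussian pattern layer, normalized summation layer."""
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * spread ** 2))
    return w @ Ttr / np.clip(w.sum(axis=1, keepdims=True), 1e-300, None)

def accuracy(P, T):
    """Fraction of test rows whose most probable class matches the target."""
    return float(np.mean(P.argmax(axis=1) == T.argmax(axis=1)))

# Synthetic stand-in data: 3 classes, 20 training / 5 test vectors each.
rng = np.random.default_rng(0)
centers = np.array([[20.0] * 10, [100.0] * 10, [180.0] * 10])
Xtr = np.repeat(centers, 20, axis=0) + rng.normal(0, 2, (60, 10))
Ttr = np.repeat(np.eye(3), 20, axis=0)
Xte = np.repeat(centers, 5, axis=0) + rng.normal(0, 2, (15, 10))
Tte = np.repeat(np.eye(3), 5, axis=0)

# Sweep the same spread constants used in the study and record test accuracy.
results = {s: accuracy(grnn(Xtr, Ttr, Xte, s), Tte)
           for s in [0.5, 1, 2.5, 5, 7.5, 9, 10, 12, 15, 17.5, 20, 25, 30, 50, 100]}
```

On real shift data the sweep would reveal, as the paper reports, that different skeleton classes prefer different spread ranges.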
When it was observed that the network could not identify with high accuracy compounds having the Thujane, Bornane, Isocamphane, Pinane and Fenchane skeletons, the training data were increased by adding to the original training set randomly selected compounds from the previously used set of test compounds, drawn from the classes whose skeletons were not correctly predicted. This reduced the total number of test compounds to 93: 33 with the Myrcane skeleton, 3 with the Santoline skeleton, 38 with the Menthane skeleton, 3 with the Thujane skeleton, 5 with the Bornane skeleton, 3 with the Isocamphane skeleton, 5 with the Pinane skeleton and 3 with the Fenchane skeleton. This procedure was carried out to ascertain whether the observed inaccuracies were due to insufficient training data. Graphs of the observed errors in individual predictions against spread constant values for randomly selected compounds from each skeleton class were plotted to give an insight into the range of spread constant values where the best results may be obtained. For Bornane, Pinane and Fenchane, the results obtained after re-training of the system were used; for these compounds the GRNN was trained at spread constants between 1 and 25, because with the previous data set (comprising 113 test compounds) the smallest errors had been observed within this range.
III. Results And Discussion
The results obtained after training the neural network and simulating with the original set of 113 test compounds using the GRNN are presented in Table 2. The probability that a compound belongs to a particular skeletal type is expressed as a percentage. (When the network returns a value of 1 for a particular skeletal type, there is 100% certainty that the unknown compound possesses that skeleton, while a value of 0 indicates a null probability.) If correctly predicted, compounds 1-33 should be Myrcane; 34-36 Santoline; 37-74 Menthane; 75-79 Thujane; 80-91 Bornane; 92-94 Isocamphane; 95-109 Pinane; and 110-113 Fenchane. The results showed that, of the 33 Myrcane compounds used as test data, the network recognized 30 at a rate of 99.98%-100%. Recognition rates of 71.70% and 78.58% were observed for compounds 28 and 29 (with probabilities of 28.23% and 21.41%, respectively, that these compounds had the Thujane skeleton). Compound 31 was wrongly predicted as having the Thujane skeleton (99.92%). The network had a 100% recognition rate for the 3 compounds belonging to the Santoline skeleton and 87.63-100% recognition for compounds belonging to the class of Menthane
monoterpenoids. Of the 5 compounds with the Thujane skeleton tested, 2 were erroneously predicted to have the Pinane skeleton.

Table 1 (excerpt): 13C NMR chemical shift data (ppm) for test compounds 106-113

Carbon    106     107     108     109     110     111     112     113
C-1      43.4    38.2    43.4    67.9    53.9    53.5    60.4    52.5
C-2     147.8   151.6   147.8   138.7   222.1   221.6   221.6   218.4
C-3     117.0   147.5   117.0   118.4    47.2    45.3    47.1    54.6
C-4      80.8    31.3    31.6    33.0    45.3    50.3    44.6    41.0
C-5      41.0    40.7    41.0    62.6    25.0    77.8    36.0    24.8
C-6      38.0    37.6    38.0    30.1    31.8    41.8    76.7    32.0
C-7      31.2    33.0    31.2   205.8    41.6    38.1    38.5    41.4
C-8      21.1    25.7    21.1    27.3    23.3    23.8    23.8    49.0
C-9      26.2    20.9    26.2    14.7    21.7    21.5    21.6    18.2
C-10     65.8   191.0    65.8    23.0    14.6    14.6    12.0    14.4

For compounds 80-88 (all belonging to the Bornane series), the network could not identify the
compounds as belonging to any specific skeleton, as the probabilities were almost evenly distributed among the Menthane, Bornane, Pinane and Fenchane skeleton types. The network could identify only 2 of the 3 Isocamphane compounds (at 85.99% and 86.47%) and wrongly predicted most of the compounds belonging to the Pinane class as Thujane (with lesser probabilities as Myrcane). Likewise, only 2 of the 4 Fenchane compounds were recognized, at 61.28% and 76.74%.
Table 2: Probability of the test compounds belonging to the skeletons researched (σ = 15)

Skeleton        1    2    3    4    5    6    7    8    9   10   11   12   13   14   15
Myrcane       100  100  100  100  100  100  100  100  100  100  100  100    0  100  100
Santoline       0    0    0    0    0    0    0    0    0    0    0    0    0    0    0
Menthane        0    0    0    0    0    0    0    0    0    0    0    0    0    0    0
Thujane         0    0    0    0    0    0    0    0    0    0    0    0    0    0    0
Bornane         0    0    0    0    0    0    0    0    0    0    0    0    0    0    0
Isocamphane     0    0    0    0    0    0    0    0    0    0    0    0    0    0    0
Pinane          0    0    0    0    0    0    0    0    0    0    0    0    0    0    0
Fenchane        0    0    0    0    0    0    0    0    0    0    0    0    0    0    0

Table 2 (continued): Probability of the test compounds belonging to the skeletons researched (σ = 15)

Skeleton       16     17     18     19     20     21     22     23     24     25     26     27     28     29     30
Myrcane       100    100    100    100    100    100  99.98  99.99  99.98    100    100    100  71.70  78.58    100
Santoline       0      0      0      0      0      0      0      0      0      0      0      0      0      0      0
Menthane        0      0      0      0      0      0   0.02   0.01   0.02      0      0      0      0      0      0
Thujane         0      0      0      0      0      0      0      0      0      0      0      0  28.23  21.41      0
Bornane         0      0      0      0      0      0      0      0      0      0      0      0      0      0      0
Isocamphane     0      0      0      0      0      0      0      0      0      0      0      0      0      0      0
Pinane          0      0      0      0      0      0      0      0      0      0      0      0   0.07   0.01      0
Fenchane        0      0      0      0      0      0      0      0      0      0      0      0      0      0      0
Table 2 (continued): Probability of the test compounds belonging to the skeletons researched (σ = 15)

Skeleton       106     107     108     109     110     111     112     113
Myrcane      58.98       0       0   16.14       0       0       0       0
Santoline        0       0       0       0       0       0       0       0
Menthane         0       0       0       0       0       0       0       0
Thujane      40.54       0   96.52       0       0       0       0       0
Bornane          0       0       0       0   87.86   38.72   23.26   85.80
Isocamphane      0       0       0       0       0       0       0       0
Pinane        0.50       1    3.47   83.86       0       0       0       0
Fenchane         0       0       0       0   12.14   61.28   76.74   14.20
To ascertain whether the inadequacies observed, especially in the results for the Thujane, Bornane, Isocamphane, Pinane and Fenchane compounds, were due to insufficient training data, the training set was enlarged as previously described. After training and simulating with the 93 compounds whose 13C NMR values were used as the test data, at the baseline spread constant of 15, the Bornane skeletons were now recognized at 68.25%, 70.82%, 99.10%, 99.95% and 99.95% for the 5 test compounds used. The network also achieved between 83.86% and 100% recognition of the 5 Pinane test compounds. No significant improvement was obtained for the Isocamphane, Thujane and Fenchane skeletons. This is expected, since of the 20 compounds added to the original training set (of 328 compounds), 7, 10, 2, 1 and 0 belong to the Bornane, Pinane, Thujane, Fenchane and Isocamphane classes respectively. Fewer compounds from the Thujane, Fenchane and Isocamphane classes were used for the re-training because only 5, 4 and 3 compounds, respectively, from these classes were present in the original set of test data. The predictive ability of the GRNN might thus have been affected by the size of the learning database. In a previous work, Ferreira et al (1998) showed that the expert system SISTEMAT could predict the Pinane skeleton type with only 0.714 accuracy, implying that other skeletons also appear, but with low statistical significance. In their pioneering work, Rufino et al (2005) showed that ANN methods give fast and accurate results for identifying skeletons and for assigning unknown compounds among distinct fingerprints (skeletons) of aporphine alkaloids. The computational method is much faster than traditional methods of skeleton prediction, since the time-consuming sequential search (especially over a large spectral library) and matching procedures (sequential comparison of an unknown target spectrum with the set of library spectra) employed by conventional databases are avoided. This makes neural networks ideal for selecting results for structure generators or for checking the entries of a database. If a large number of skeletons has to be predicted, or a fast and easy check of a structure is needed, this approach is advantageous. Moreover, the large amount of disk space for saving the database, and the long time for loading data from external computers, are no longer necessary.
From Fig. 4 below, it can be observed that the spread constant range over which excellent prediction results were obtained appears to be specific to each skeleton class. The best prediction of the Myrcane skeleton was obtained within the spread constant range of 10-30, and for the Menthane skeleton the best results were obtained between 5 and 30.
Figure 4: Graphs of observed errors in individual prediction against spread constant values
Though the available test data are few, one can cautiously infer that the best predictions appear to be obtained within the spread constant range of 1-20 for the Bornane skeleton, 7.5-20 for the Santoline skeleton and 5-20 for the Pinane skeleton. Within these broad ranges of values, the errors in prediction were zero in most cases. The variation of the generalization error with the spread constant is an important parameter for assessing the efficacy of any GRNN: a network that gives a constant error over a broad range of spread constants is considered better, since designers can then choose from a wide range of spread constant values for their network.
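The "broad flat region" criterion described above can be made concrete with a small helper that scans an error-versus-spread curve for the longest run of near-zero errors. The error values below are illustrative only (they roughly mimic the Menthane case, whose best results lay between spread constants of 5 and 30); they are not numbers from the paper:

```python
def stable_range(spreads, errors, tol=1e-3):
    """Return the longest contiguous run of spread values whose prediction
    error stays within `tol` of zero (a simple selection heuristic)."""
    best, cur = [], []
    for s, e in zip(spreads, errors):
        if abs(e) <= tol:
            cur.append(s)
            if len(cur) > len(best):
                best = list(cur)
        else:
            cur = []          # error too large: the flat run is broken
    return best

spreads = [0.5, 1, 2.5, 5, 7.5, 9, 10, 12, 15, 17.5, 20, 25, 30, 50, 100]
errors  = [0.9, 0.6, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4, 0.8]
flat = stable_range(spreads, errors)   # -> [5, 7.5, 9, 10, 12, 15, 17.5, 20, 25, 30]
```

A designer would then pick a spread constant near the middle of the returned run, where small changes in the parameter leave the error unchanged.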
(Figure 4 panels: error vs. spread constant for representative test compounds from each class — Myrcane (compounds 1, 5, 10, 15, 20, 23, 26), Santoline (34-36), Menthane (37, 40, 45, 50, 55, 61, 74), Thujane (75-77), Bornane (87-91), Isocamphane (92-94), Pinane (104-108) and Fenchane (111, 112).)
IV. Conclusion
From this study, it can be seen that the predictions obtained using the GRNN are in good agreement with the actual skeletons of the compounds tested. The network was tested with compounds belonging to diverse skeleton types, and good results were obtained in almost all cases. The quality of the network's predictions, however, depends on the availability of sufficiently diverse training data, covering adequately all the classes of monoterpenoid compounds. The GRNN could therefore be a powerful complementary tool in the structural elucidation of monoterpenoids.
References
[1]. N.H. Fischer, Plant terpenoids as allelopathic agents in Harbone J.B. and Tomas-Barberan F.A. (Eds.) Ecological Chemistry and
Biochemistry of Plant Terpenoids,. Clarendon Press, Oxford, 1991, 377.
[2]. M. E. Elyashberg, K.A. Blinov, A. J. Williams, E. R. Martirosian and S. G. Molodtsov, Application of a new expert system for the
structure elucidation of natural products from their 1D and 2D NMR data. Journal of Natural Products. 65, 2002, 693-703.
[3]. I.I. Strokov and K. S. Lebedev, Computer aided method for chemical structure elucidation using spectral databases and C-13 NMR
correlation tables. Journal of Chemical Information & Computer Sciences. 39, 1999, 659-665.
[4]. J. Meiler and M. Kock, Novel Methods of Automated Structure Elucidation based on 13C NMR Spectroscopy. Magn. Reson.
Chem. 42, 2004, 1042–1045.
[5]. M.J.P. Ferreira, G.V. Rodrigues, A.J.C. Brant and V.P. Emerenciano, REGRAS: an auxiliary program for pattern recognition and
substructure elucidation of monoterpenes. Spectroscopy. 15, 2000, 65–98.
[6]. M.J.P. Ferreira, A.J.C. Brant, G.V. Rodrigues and V.P. Emerenciano, Automatic identification of terpenoid skeletons through 13C nuclear magnetic resonance data disfunctionalization. Analytica Chimica Acta. 429, 2001, 151–170.
[7]. M.J.P. Ferreira, F.C. Oliveira, S.A.V. Alvarenga, P.A.T. Macari, G.V. Rodrigues and V.P. Emerenciano, Automatic identification by 13C NMR of substituent groups bonded in natural product skeletons. Computers & Chemistry. 26, 2002, 601–632.
[8]. G.V. Rodrigues, I.P.A. Campos and V.P. Emerenciano, Applications of artificial intelligence to structure determination of organic compounds. Determination of groups attached to skeleton of natural products using 13C Nuclear Magnetic Resonance Spectroscopy. Spectroscopy, 1997, 191-200.
[9]. A.A. Rufino, A.J.C. Brant, J.B.O. Santos, M.J.P. Ferreira and V.P. Emerenciano, Simple Method for Identification of Aporphine Alkaloids from 13C NMR Data Using Artificial Neural Networks. J. Chem. Inf. Model. 45, 2005, 645-651.
[10]. P. Wrede, O. Landt, S. Klages, A. Faterni, U. Hahn and G. Schneider, Peptidase design aided by neural networks: biological
activity of artificial signal peptidase I cleavage sites. Biochemistry, 37, 1998, 3588-3593.
[11]. M.B. Fernandes, M.T. Scotti, M.J.P. Ferreira and V.P. Emerenciano, Use of self-organizing maps and molecular descriptors to
predict the cytotoxic activity of sesquiterpene lactones. European Journal of Medicinal Chemistry. 43, 2008, 2197-2205.
[12]. J. Aires-de-Sousa, M. Hemmer and J. Gasteiger, “Prediction of 1H NMR Chemical Shifts Using Neural Networks”. Analytical
Chemistry, 74(1), 2002, 80-90.
[13]. Y. Binev and J. Aires-de-Sousa, "Structure-Based Predictions of 1H NMR Chemical Shifts Using Feed-Forward Neural Networks".
Chem. Inf. Comput. Sci., 44, 2004, 940-945.
[14]. L. Fraser and D.A. Mulholland, A robust technique for group classification of the C-13 NMR spectra of natural products from Meliaceae. Fresenius J Anal Chem. 365, 1999, 631-634.
[15]. M.T. Scott, V. Emerenciano, M.J.P. Ferreira, L. Scotti, R. Stefani, M.S. da Silva and F.J.B. Mendonça Junior, Self-Organizing
Maps of Molecular Descriptors for Sesquiterpene Lactones and Their Application to the Chemotaxonomy of the Asteraceae Family.
Molecules. 17, 2012, 4684-4702.
[16]. S.A. Hannan, R.R. Manza, and R.J. Ramteke, Generalized regression neural network and radial basis function for heart disease
diagnosis. International Journal of Computer Applications, 7(13), 2010, 7-13.
[17]. D.F. Specht, A General Regression Neural Network. IEEE Transactions on Neural Networks. 2(6), 1991, 568-576.
[18]. G. Sun, S. J. Hoff, B. C. Zelle and M. A. Nelson, Development and comparison of Backpropagation and Generalized regression
neural network models to predict diurnal and seasonal gas and pm10 Concentrations and emissions from swine buildings. American
Society of Agricultural and Biological Engineers. 51(2), 2008, 685-694.
[19]. C. Mahesh, E. Kannan and M.S. Saravanan, Generalized regression neural network based expert system for hepatitis B diagnosis. Journal of Computer Science. 10(4), 2014, 563-569.
[20]. M.J.P. Ferreira, V.P. Emerenciano, G.A.R. Lini, P. Romoff, P.A.T. Macari and G.V. Rodrigues, 13C NMR spectroscopy of monoterpenoids. Progress in Nuclear Magnetic Resonance Spectroscopy, 33, 1998, 153–206.