The presentation discusses a new method for imaging seismic data that was recently implemented on the SmartGeo cloud computing portal (https://smartgeo.crs4.it/enginframe/eiagrid/eiagrid.xml). The method is particularly suited for near-surface applications such as geotechnical engineering or environmental studies. It is shown that instead of limiting the stacking velocity analysis to single common-midpoint (CMP) gathers, groups of neighboring CMP gathers are considered to identify entire reflection surfaces in the data. As a result, the extracted kinematic properties of the subsurface, e.g. wave-propagation velocities, are more reliable, and the final data stacking leads to a more detailed subsurface image even in the case of noisy prestack data and strong lateral velocity variations. In the second part of the presentation, the successful application of the proposed method is discussed in a case study based on an ultra-shallow seismic SH-wave data set recorded close to Teulada, Sardinia, Italy.
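As context for the conventional baseline this method improves on, here is a minimal sketch of single-CMP processing: hyperbolic normal-moveout (NMO) correction at one stacking velocity followed by a stack. The constant velocity and the array layout are illustrative assumptions, not the portal's implementation.

```python
import numpy as np

def nmo_stack(cmp_gather, offsets, dt, velocity):
    """Hyperbolic NMO correction and stack of a single CMP gather.

    cmp_gather: (n_samples, n_traces) amplitudes, offsets in metres,
    dt: sample interval in seconds, velocity: assumed constant
    stacking velocity in m/s (illustrative only).
    """
    n_samples, n_traces = cmp_gather.shape
    t0 = np.arange(n_samples) * dt  # zero-offset two-way times
    stacked = np.zeros(n_samples)
    for j, x in enumerate(offsets):
        # Reflection time at offset x: t(x) = sqrt(t0^2 + (x/v)^2)
        t_nmo = np.sqrt(t0**2 + (x / velocity) ** 2)
        # Map the trace back to zero-offset time by linear interpolation
        stacked += np.interp(t_nmo, t0, cmp_gather[:, j], right=0.0)
    return stacked / n_traces
```

The surface-based approach described above replaces this single-velocity, single-gather scan with a search over reflection surfaces spanning neighboring gathers.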
The use of the EIAGRID/SmartGEO platform is presented and discussed in detail (in English) through two case studies relevant to geotechnical and environmental applications. Afterwards, the interested user should be able to use the platform autonomously through the SmartGEO portal.
Hyperon and charmed baryon masses and axial charges from Lattice QCD (Christos Kallidonis)
Poster presented at the Electromagnetic Interactions on Nucleons and Nuclei 2013 (EINN2013) Conference, held in Paphos, Cyprus. We present results on the masses and axial charges of all forty light, strange and charm baryons, obtained from Lattice QCD simulations.
From Weather Dwarfs to Kilometre-Scale Earth System Simulations (inside-BigData.com)
In this deck from PASC18, Nils P. Wedi from ECMWF presents: From Weather Dwarfs to Kilometre-Scale Earth System Simulations.
"The increasingly large amounts of data being produced b weather and climate simulations and earth system observations is sometimes characterised as a deluge. This deluge of data is both a challenge and an opportunity. The main opportunities are to make use of this wealth of data to 1) improve knowledge by extracting additional knowledge from the data and 2) to improve the quality of the models themselves by analysing the accuracy, or lack thereof, of the resultant simulation data. An example of the former case is improved prediction of large scale phenomena such as El Nino. An example of the latter is the improvement of a Physics parameterisation scheme through detailed analysis of the errors in a large number of datasets.
"One way to realize these opportunities is to use machine learning approaches. As machine learning in weather and climate is a relatively new topic this minisymposium introduces the audience to how machine learning could be used in weather and climate and outlines its implications in terms of computing costs. To ground the ideas in concrete examples it also illustrates the use of machine learning in the weather and climate domain with practical examples."
Watch the video: https://wp.me/p3RLHQ-iPB
Learn more: https://pasc18.pasc-conference.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
GPR Probing of Smoothly Layered Subsurface Medium: 3D Analytical Model (Leonid Krinitsky)
An analytical approach to GPR probing of a horizontally layered subsurface medium is developed, based on the coupled-wave WKB approximation. An empirical model of the current in the dipole transmitter antenna is used.
Computing the masses of hyperons and charmed baryons from Lattice QCD (Christos Kallidonis)
Poster presented at the Computational Sciences 2013 Conference (Winner of poster competition). We present results on the masses of all forty light, strange and charm baryons from Lattice QCD simulations, focusing particularly on the computational aspects and requirements of such calculations.
TRB 2014 - Automatic Spatial-temporal Identification of Points of Interest in... (Sean Barbeau)
Presented at the Transportation Research Board 2014 meeting - Past research in travel surveys has shown that a GPS mobile phone-based survey is a useful tool for collecting information about individuals. While a passive travel survey collection is preferred to an active travel survey method, passive collection remains a challenge due to a lack of high-accuracy algorithms to automatically identify trip starts and trip ends. This paper presents Automatic Spatial Temporal Identification of Points of Interest (ASTIPI), an unsupervised spatial-temporal algorithm to identify POIs. ASTIPI utilizes the temporal and spatial properties of the dataset to obtain a high accuracy of POI identification, even on a reduced GPS dataset that uses techniques to conserve battery life on mobile devices. While reducing outliers within POIs, ASTIPI also has a linear running time and maintains the temporal order of the location data so that arrival and departure information can be easily extracted and thus users' trips can be quickly identified. Using real data from mobile devices, evaluations of ASTIPI and other existing algorithms are performed, showing that ASTIPI obtains the highest accuracy of POI identification, with an average accuracy of 88% when performing on full datasets generated using the GPS Auto-Sleep module and an average accuracy of 59% when performing on reduced datasets generated using both the GPS Auto-Sleep module and the Critical Points algorithm.
(C) 2014 USF, Patent Pending
The melting of the West Antarctic ice sheet (WAIS) is likely to cause a significant rise in sea levels. Studying the present state of WAIS and predicting its future behavior involves the use of computer models of ice sheet dynamics as well as observational data. I will outline general statistical challenges posed by these scientific questions and data sets.
This discussion is based on joint work with Yawen Guan (Penn State/SAMSI), Won Chang (U. of Cincinnati), Patrick Applegate, David Pollard (Penn State)
Ikuro Sato's slide presented at ICONIP2017 (Ikuro Sato)
Ikuro Sato's slide presented at the International Conference on Neural Information Processing (ICONIP) 2017. This research work proposes a new update rule for asynchronous, distributed SGD training.
Model predictive-fuzzy-control-of-air-ratio-for-automotive-engines (pace130557)
Among all of the engine control variables, the automotive engine air-ratio plays an important role in reducing emissions and fuel consumption while maintaining satisfactory engine power.
Aerospace Engineering projects in aerodynamics, finite element analysis, math... (Quoc-Bao Nguyen)
Slides on aerospace engineering projects in aerodynamics, finite element analysis, aerospace system design, math modelling, heat transfer, and database modelling.
Production decline analysis is a traditional means of identifying well production problems and predicting well performance and life based on real production data. It uses empirical decline models that have little fundamental justification. These models, sketched in code below, include:
• Exponential decline (constant fractional decline)
• Harmonic decline
• Hyperbolic decline
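A minimal sketch of these three forms, assuming the standard Arps rate equations (q_i is the initial rate, D_i the initial decline rate, and b the decline exponent):

```python
import numpy as np

def arps_rate(qi, di, t, b=0.0):
    """Arps decline-curve rate q(t).

    b = 0   -> exponential decline (constant fractional decline),
    b = 1   -> harmonic decline,
    0 < b < 1 -> hyperbolic decline.
    """
    t = np.asarray(t, dtype=float)
    if b == 0.0:
        return qi * np.exp(-di * t)              # exponential
    return qi / (1.0 + b * di * t) ** (1.0 / b)  # hyperbolic (harmonic at b = 1)

t = np.linspace(0.0, 10.0, 6)            # years
print(arps_rate(1000.0, 0.3, t, b=0.0))  # exponential
print(arps_rate(1000.0, 0.3, t, b=1.0))  # harmonic
print(arps_rate(1000.0, 0.3, t, b=0.5))  # hyperbolic
```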
Precise Attitude Determination Using a Hexagonal GPS Platform (CSCJournals)
In this paper, a method of precise attitude determination using GPS is proposed. We use a hexagonal antenna platform of 1 m diameter (called the wheel) and post-processing algorithms to calculate attitude, where we focus on yaw to prove the concept. The first part of the algorithm determines an initial absolute position using single point positioning. The second part involves double differencing (DD) the carrier phase measurements for the received GPS signals to determine the relative positioning of the antennas on the wheel. The third part consists of Direct Computation Method (DCM) or Implicit Least Squares (ILS) algorithms which, given sufficiently accurate knowledge of the fixed body-frame coordinates of the wheel, take in the relative positions of all the receivers and produce the attitude. Field testing results presented in this paper show that an accuracy of 0.05 degrees in yaw can be achieved. The results are compared with a theoretical error, which is shown by Monte Carlo simulation to be < 0.001 degrees. The improvement over the current state of the art is that current methods either require very large baselines of several meters to achieve such accuracy or provide errors in yaw that are orders of magnitude greater.
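As a hedged illustration of the final step, a single-baseline reduction of the yaw computation: given relative East-North-Up antenna positions (e.g. from double-differenced carrier-phase processing), the heading is the azimuth of the baseline. The paper's DCM/ILS algorithms fuse all antennas on the wheel; this two-antenna version is only a sketch.

```python
import numpy as np

def yaw_from_baseline(ant_a_enu, ant_b_enu):
    """Heading (yaw) of the baseline from antenna A to antenna B.

    Inputs are East-North-Up coordinates in metres; returns degrees
    clockwise from North. A real system combines all baselines
    (DCM or ILS) rather than a single antenna pair.
    """
    d_east, d_north, _ = np.asarray(ant_b_enu) - np.asarray(ant_a_enu)
    return float(np.degrees(np.arctan2(d_east, d_north)) % 360.0)

print(yaw_from_baseline([0.0, 0.0, 0.0], [0.5, 0.5, 0.0]))  # 45.0 degrees
```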
Estimation of global solar radiation by using machine learning methods (mehmet şahin)
In this study, global solar radiation (GSR) was estimated for 53 locations by using the ELM, SVR, k-NN, LR and NU-SVR methods. The methods were trained with a two-year data set, and their accuracy was tested with a one-year data set; the data set of each year consisted of 12 months. The values of month, altitude, latitude, longitude, vapour pressure deficit and land surface temperature were used as inputs for developing the models, and GSR was obtained as the output. The values of vapour pressure deficit and land surface temperature were taken from the radiometry of the NOAA-AVHRR satellite. The estimated solar radiation data were compared with actual data obtained from meteorological stations. According to the statistical results, the most successful method was NU-SVR: its RMSE and MBE values were found to be 1.4972 MJ/m² and 0.2652 MJ/m², respectively, and its R value was 0.9728. The worst-performing method was LR. For the other methods, RMSE values ranged between 1.7746 MJ/m² and 2.4546 MJ/m². It can be seen from the statistical results that the ELM, SVR, k-NN and NU-SVR methods can be used for the estimation of GSR.
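The study's code is not given in the abstract; the snippet below is only a sketch of how a NU-SVR regressor of this kind could be set up with scikit-learn. The random arrays stand in for the station-month feature table, and the hyperparameters are assumptions.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVR

# Stand-in data: one row per station-month with features
# [month, altitude, latitude, longitude, VPD, LST]; target is GSR.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((1272, 6)), rng.random(1272)  # 53 sites x 24 months
X_test, y_test = rng.random((636, 6)), rng.random(636)      # 53 sites x 12 months

model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
model.fit(X_train, y_train)
rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"RMSE: {rmse:.4f}")  # the study reports 1.4972 MJ/m^2 for NU-SVR
```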
Design and Implementation of Variable Radius Sphere Decoding Algorithm (csandit)
The Sphere Decoding (SD) algorithm is a decoding algorithm built on the Zero-Forcing (ZF) algorithm in the real number field. The classical SD algorithm is known for its outstanding Bit Error Rate (BER) performance and decoding strategy. The algorithm obtains its maximum-likelihood solution by recursively shrinking the search radius. However, the method of shrinking the search radius is too complicated for use in ground communication systems. This paper proposes a Variable Radius Sphere Decoding (VR-SD) algorithm based on the ZF algorithm in order to simplify the complex search steps. We prove the advantages of the VR-SD algorithm through the derivation of mathematical formulas and the simulation of the BER performance of the SD and VR-SD algorithms.
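The paper's VR-SD itself is not reproduced here; as a toy illustration of the ingredients it builds on, the snippet contrasts zero-forcing detection with the exhaustive maximum-likelihood search that sphere decoding accelerates by discarding candidates outside a search radius. The 2x2 BPSK system is an assumption chosen for brevity.

```python
import numpy as np
from itertools import product

# Toy 2x2 MIMO link with BPSK symbols in {-1, +1}: y = H s + n
rng = np.random.default_rng(2)
H = rng.normal(size=(2, 2))
s_true = np.array([1.0, -1.0])
y = H @ s_true + 0.1 * rng.normal(size=2)

# Zero-Forcing: invert the channel, then slice to the nearest symbol
s_zf = np.sign(np.linalg.pinv(H) @ y)

# Maximum likelihood by exhaustive search; a sphere decoder returns the
# same minimizer while pruning candidates whose residual exceeds the radius
candidates = [np.array(c) for c in product([-1.0, 1.0], repeat=2)]
s_ml = min(candidates, key=lambda s: float(np.linalg.norm(y - H @ s) ** 2))
print(s_zf, s_ml)
```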
LIDAR POINT CLOUD CLASSIFICATION USING EXPECTATION MAXIMIZATION ALGORITHM (ijnlc)
The EM algorithm is a common algorithm in data mining. Alternating between the E and M steps, the algorithm builds a model that can assign class labels to data points. EM not only optimizes the parameters of the model but can also predict missing data during the iterations. This paper therefore focuses on researching and improving the EM algorithm to suit LiDAR point cloud classification, based on the idea of partitioning the point cloud and using a scheduling parameter in the E step to help the algorithm converge faster with a shorter run time. The proposed algorithm is tested on a measurement data set from Nghe An province, Vietnam, achieving more than 92% accuracy with a faster runtime than the original EM algorithm.
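The paper's modified EM (partitioned point cloud, scheduled E-step) is not spelled out in the abstract; the sketch below shows only the standard EM baseline it improves on, fitting a Gaussian mixture to hypothetical LiDAR point features with scikit-learn.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical stand-in for a LiDAR tile: one row per point with
# features [x, y, elevation, intensity]; two synthetic classes.
rng = np.random.default_rng(1)
ground = np.column_stack([100 * rng.random((500, 2)),
                          rng.normal(2.0, 0.3, (500, 1)),
                          rng.normal(20.0, 5.0, (500, 1))])
canopy = np.column_stack([100 * rng.random((500, 2)),
                          rng.normal(12.0, 2.0, (500, 1)),
                          rng.normal(60.0, 10.0, (500, 1))])
points = np.vstack([ground, canopy])

# Standard EM: alternate the E-step (class responsibilities) and the
# M-step (parameter updates) until convergence, then read off labels.
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      max_iter=100, random_state=0)
labels = gmm.fit_predict(points)
```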
GRADIENT OMISSIVE DESCENT IS A MINIMIZATION ALGORITHM (ijscai)
This article presents a promising new gradient-based backpropagation algorithm for multi-layer feedforward networks. The method requires no manual selection of global hyperparameters and is capable of dynamic local adaptations using only first-order information at a low computational cost. Its semi-stochastic nature makes it fit for mini-batch training and robust to different architecture choices and data distributions. Experimental evidence shows that the proposed algorithm improves training in terms of both convergence rate and speed as compared with other well-known techniques.
Variable neighborhood Prediction of temporal collective profiles by Keun-Woo ... (EuroIoTa)
Temporal collective profiles generated by mobile network users can be used to predict network usage, which in turn can be used to improve the performance of the network to meet user demands. This presentation will talk about a prediction method for temporal collective profiles which is suitable for online network management. Using a weighted graph representation, the target sample is observed during a given period to determine a set of neighboring profiles that are considered to behave similarly enough. The prediction of the target profile is based on the weighted average of its neighbors, where the optimal number of neighbors is selected through a form of variable neighborhood search. This method is applied to two datasets, one provided by a mobile network service provider and the other from a Wi-Fi service provider. The proposed prediction method can conveniently characterize user behavior via the graph representation, while outperforming existing prediction methods. Also, unlike existing methods that utilize categorization, it has a low computational complexity, which makes it suitable for online network analysis.
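A hedged sketch of the core prediction step only (not the talk's full variable neighborhood search): the target's next value is taken as a similarity-weighted average over its k most similar neighboring profiles. The fixed k and the inverse-distance weights are simplifying assumptions.

```python
import numpy as np

def predict_profile(target_history, profiles, k=3):
    """Predict the target's next value from its k nearest neighbors.

    target_history: (T,) usage of the target over the observation window.
    profiles: (N, T+1) neighbors' usage including the next time step.
    """
    past, future = profiles[:, :-1], profiles[:, -1]
    # Similarity as inverse Euclidean distance over the observed window
    dists = np.linalg.norm(past - target_history, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)
    return float(np.average(future[nearest], weights=weights))

rng = np.random.default_rng(3)
profiles = rng.random((20, 25))  # 20 neighbors, 24 observed steps + 1 future
target = rng.random(24)
print(predict_profile(target, profiles))
```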
USING LEARNING AUTOMATA AND GENETIC ALGORITHMS TO IMPROVE THE QUALITY OF SERV... (IJCSEA Journal)
A hybrid learning automata-genetic algorithm (HLGA) is proposed to solve the QoS routing optimization problem of next generation networks. The algorithm combines the advantages of the Learning Automata (LA) and Genetic Algorithm (GA) approaches. It first uses the good global search capability of LA to generate the initial population needed by GA, then uses GA to improve the Quality of Service (QoS) and acquire the optimized tree through new algorithms for the crossover and mutation operators; the underlying routing problem is NP-complete. In the proposed algorithm, the connectivity matrix of edges is used for the genotype representation. Some novel heuristics are also proposed for mutation, crossover, and the creation of random individuals. We evaluate the performance and efficiency of the proposed HLGA-based algorithm in comparison with other existing heuristic and GA-based algorithms through simulation. The simulation results demonstrate that the proposed algorithm not only calculates quickly and accurately but also improves the efficiency of QoS routing in Next Generation Networks. The proposed algorithm outperforms all of the previous algorithms in the literature.
Development of Methodology for Determining Earth Work Volume Using Combined S... (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, Assessment, and many more.
ENERGY AND LATENCY AWARE APPLICATION MAPPING ALGORITHM & OPTIMIZATION FOR HOM... (cscpconf)
Energy efficiency is one of the most critical issues in the design of Systems on Chip. In Network on Chip (NoC) based systems, energy consumption is influenced dramatically by the mapping of the Intellectual Property (IP) cores, which affects the performance of the system. In this paper we test previously proposed algorithms and introduce a new energy-proficient mapping algorithm for 3D NoC architectures. In addition, a hybrid method has been implemented using a bio-inspired optimization (particle swarm optimization) technique. The proposed algorithm has been implemented and evaluated on randomly generated benchmarks and real-life applications such as MMS, Telecom and VOPD. The algorithm has also been tested with the E3S benchmark and compared with the existing algorithms (spiral and crinkle), showing better reduction in communication energy consumption and improvement in the performance of the system. Compared with spiral and crinkle, experimental results show that the average reduction in communication energy consumption is 19% with spiral and 17% with crinkle mapping, while the reduction in communication cost is 24% and 21%, and the reduction in latency is 24% and 22%, respectively. Optimizing our work and the existing methods using the bio-inspired technique, the average energy reduction is found to be 18% and 24%.
Stereo matching based on absolute differences for multiple objects detection (TELKOMNIKA JOURNAL)
This article presents a new algorithm for object detection using a stereo camera system. The difficulty in obtaining accurate object detection with a stereo camera is the imprecision of the matching process between two scenes of the same viewpoint. Hence, this article aims to reduce incorrect pixel matching through four stages. The new algorithm is a combination of a continuous process of matching cost computation, aggregation, optimization and filtering. The first stage is matching cost computation to acquire a preliminary result using an absolute differences method. The second stage, known as the aggregation step, uses a guided filter with a fixed window support size. After that, the optimization stage uses a winner-takes-all (WTA) approach, which selects the smallest matching difference value and normalizes it to the disparity level. The last stage in the framework uses a bilateral filter; it effectively further decreases the error on the disparity map, which contains the object detection and location information. The proposed work produces low errors (12.11% and 14.01% non-occluded and all errors, respectively) based on the KITTI dataset, performs much better than before the proposed framework was applied, and is competitive with some newly available methods.
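A minimal numpy sketch of the first and third stages (absolute-differences matching cost with winner-takes-all disparity selection); the guided-filter aggregation and bilateral-filter refinement stages of the proposed framework are omitted.

```python
import numpy as np

def sad_disparity(left, right, max_disp=32, win=5):
    """SAD block matching with winner-takes-all, left image as reference."""
    h, w = left.shape
    half = win // 2
    pad = lambda img: np.pad(img.astype(np.float32), half, mode="edge")
    L, R = pad(left), pad(right)
    disparity = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            patch_l = L[y:y + win, x:x + win]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                patch_r = R[y:y + win, x - d:x - d + win]
                cost = np.abs(patch_l - patch_r).sum()  # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disparity[y, x] = best_d  # winner-takes-all over candidate disparities
    return disparity
```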
Cancer cell metabolism: special Reference to Lactate Pathway (AADYARAJPANDEY1)
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy we need to function.
Energy is stored in the bonds of glucose and when glucose is broken down, much of that energy is released.
Cell utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules - a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Krebs cycle. The Krebs cycle allows cells to “burn” the pyruvates made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis - Kreb's - oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELL:
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
Introduction to the WARBURG PHENOMENON:
WARBURG EFFECT: Usually, cancer cells are highly glycolytic (glucose addiction) and take up more glucose from outside than normal cells do.
Otto Heinrich Warburg (8 October 1883 – 1 August 1970) was awarded the 1931 Nobel Prize in Physiology or Medicine for his "discovery of the nature and mode of action of the respiratory enzyme."
WARBURG EFFECT: The tendency of cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg made the observation that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN (Sérgio Sacani)
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
Seminar of U.V. Spectroscopy by SAMIR PANDA (SAMIR PANDA)
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that measures the amount of light absorbed by the analyte.
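Quantitatively, absorbance follows the Beer-Lambert law A = ε·l·c; a small worked sketch (the intensities and molar absorptivity below are made-up values):

```python
import math

def absorbance(i0, i_transmitted):
    """Absorbance A = log10(I0 / I) from incident and transmitted intensity."""
    return math.log10(i0 / i_transmitted)

def concentration(a, molar_absorptivity, path_cm):
    """Beer-Lambert law A = epsilon * l * c, solved for the concentration c."""
    return a / (molar_absorptivity * path_cm)

A = absorbance(100.0, 25.0)                      # ~0.602
c = concentration(A, molar_absorptivity=5400.0,  # L/(mol*cm), assumed value
                  path_cm=1.0)                   # result in mol/L
print(A, c)
```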
(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini... (Scintica Instrumentation)
Intravital microscopy (IVM) is a powerful tool utilized to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been achieved using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed tissue imaging, IVM allows for ultra-fast, high-resolution imaging of cellular processes over time and space in their natural environment. Real-time visualization of biological processes in the context of an intact organism helps maintain physiological relevance and provides insights into the progression of disease, response to treatments and developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM Technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system’s unique features and user-friendly software enable researchers to probe fast, dynamic biological processes such as immune cell tracking and cell-cell interaction as well as vascularization and tumor metastasis in exceptional detail. This webinar will also give an overview of IVM being utilized in drug development, offering a view into the intricate interaction between drugs/nanoparticles and tissues in vivo and allowing for the evaluation of therapeutic intervention in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancement of novel therapeutic strategies.
Brief information about the SCOP protein database used in bioinformatics.
The Structural Classification of Proteins (SCOP) database is a comprehensive and authoritative resource for the structural and evolutionary relationships of proteins. It provides a detailed and curated classification of protein structures, grouping them into families, superfamilies, and folds based on their structural and sequence similarities.
Richard's aventures in two entangled wonderlands (Richard Gill)
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Multi-source connectivity as the driver of solar wind variability in the heli... (Sérgio Sacani)
The ambient solar wind that fills the heliosphere originates from multiple sources in the solar corona and is highly structured. It is often described as high-speed, relatively homogeneous plasma streams from coronal holes and slow-speed, highly variable streams whose source regions are under debate. A key goal of ESA/NASA's Solar Orbiter mission is to identify solar wind sources and understand what drives the complexity seen in the heliosphere. By combining magnetic field modelling and spectroscopic techniques with high-resolution observations and measurements, we show that the solar wind variability detected in situ by Solar Orbiter in March 2022 was driven by spatio-temporal changes in the magnetic connectivity to multiple sources in the solar atmosphere. The magnetic field footpoints connected to the spacecraft moved from the boundaries of a coronal hole to one active region (12961) and then across to another region (12957). This is reflected in the in situ measurements, which show the transition from fast to highly Alfvénic and then to slow solar wind, disrupted by the arrival of a coronal mass ejection. Our results describe solar wind variability at 0.5 au but are applicable to near-Earth observatories.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... (Ana Luísa Pinho)
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich on features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization. To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making. They monitor common gases, weather parameters, and particulates.
On the Effect of Geometries Simplification on Geo-spatial Link Discovery
1. On the Effect of Geometries Simplification on Geo-spatial Link Discovery
Abdullah Fathi Ahmed1, Mohamed Ahmed Sherif1,2,
and Axel-Cyrille Ngonga Ngomo1,2
1Department of Computer Science, University Paderborn, 33098 Paderborn, Germany
afaahmed@mail.uni-paderborn.de
{sherif|ngonga}@upb.de
2 Department of Computer Science, University of Leipzig, 04109 Leipzig, Germany
{sherif|ngonga}@informatik.uni-leipzig.de
12 September, 2018
3. Motivation
Link Discovery among RDF geospatial data
Linked Data Cloud
http://stats.lod2.eu
150+ billion triples
46+ million links
Mostly owl:sameAs
Large geospatial datasets
LinkedGeoData contains 20+ billion triples
CLC consists of 2+ million resources
NUTS contains up to 1500 points/resources
4. Motivation
Why is linking geospatial resources difficult?
Link Discovery
Given two knowledge bases S and T,
find links of type R between S and T
Formally find M = {(s, t) ∈ S × T : R(s, t)}
Naïve computation of M requires quadratic time complexity
Geo-spatial resources available on the LOD
Described using polygons
Large in number
Demands the computation of
1 Topological relations
2 Point-set distances
5. Motivation
Why is linking of simplified geospatial data important?
Real-time applications
Structured machine learning
Cross-ontology QA
Reasoning
Federated Queries
...
The trade-off between
Runtime and
Accuracy
6. Approach
I. Line Simplification
Input: Polygonized curve with n vertices
Goal: Find an approximating polygonized curve with m vertices, where m < n
Idea: Approximate a line with a defined error tolerance > 0
Many algorithms exist
Douglas-Peucker
Visvalingam-Whyatt
...
7. Approach
I. Line Simplification: Douglas-Peucker Algorithm
Construct a line segment from the first point to the last point
Find the point with the farthest distance dmax from the line segment
If dmax ≤ tolerance, the approximation is accepted
Otherwise, split at that point and keep recursing
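A plain-Python sketch of the algorithm as stated on this slide, treating coordinates as planar (a production pipeline would typically rely on a geometry library's implementation):

```python
import math

def _point_segment_distance(p, a, b):
    """Distance from point p to the segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp the projection onto the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def douglas_peucker(points, tolerance):
    """Simplify a polyline, always keeping its first and last points."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the first-to-last segment
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_segment_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]  # accept the straight-line approximation
    # Otherwise split at the farthest vertex and recurse on both halves
    left = douglas_peucker(points[:index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right
```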
8. Approach
II. Topological Relations: The Dimensionally Extended nine-Intersection Model (DE-9IM)
Standard to describe the topological relations in 2D space.
DE-9IM is based on the intersection matrix:
$$\mathrm{DE9IM}(g_1, g_2) = \begin{pmatrix} \dim(I(g_1) \cap I(g_2)) & \dim(I(g_1) \cap B(g_2)) & \dim(I(g_1) \cap E(g_2)) \\ \dim(B(g_1) \cap I(g_2)) & \dim(B(g_1) \cap B(g_2)) & \dim(B(g_1) \cap E(g_2)) \\ \dim(E(g_1) \cap I(g_2)) & \dim(E(g_1) \cap B(g_2)) & \dim(E(g_1) \cap E(g_2)) \end{pmatrix}$$
where I, B and E denote the interior, boundary and exterior of a geometry
At least one shared point is required for a relation to hold
For the disjoint relation ⇒ inverse of the intersects relation
Accelerates the computation of any topological relation
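For illustration, the DE-9IM string of two overlapping squares can be inspected with the shapely library (assuming it is available); each character is the dimension of one interior/boundary/exterior intersection:

```python
from shapely.geometry import Polygon

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
b = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])

print(a.relate(b))      # DE-9IM string, '212101212' for these overlapping squares
print(a.intersects(b))  # True: the geometries share at least one point
print(a.disjoint(b))    # False: disjoint is the inverse of intersects
print(a.overlaps(b))    # True: partial overlap of equal-dimension geometries
```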
9. Approach
III. Distance Measures for Point Sets
Input: two resources with geometries gs and gt
Compute the orthodromic distance δ(si, tj) between pairwise points of gs and gt:
$$\delta(s_i, t_j) = R \cos^{-1}\left(\sin(\varphi_{s_i})\sin(\varphi_{t_j}) + \cos(\varphi_{s_i})\cos(\varphi_{t_j})\cos(\lambda_{s_i} - \lambda_{t_j})\right)$$
where pi is a point on the surface with latitude ϕi and longitude λi, and R is the Earth radius
Many methods exist to compute the (gs, gt) point-set distance
Hausdorff:
$$D_{\mathrm{Hausdorff}}(g_s, g_t) = \max_{s_i \in g_s} \min_{t_j \in g_t} \delta(s_i, t_j)$$
Mean:
$$D_{\mathrm{mean}}(g_s, g_t) = \delta\left(\frac{1}{n}\sum_{s_i \in g_s} s_i,\ \frac{1}{m}\sum_{t_j \in g_t} t_j\right)$$
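A small numpy sketch of these measures (the directed Hausdorff variant, as on the slide, and the centroid-based mean measure); the mean Earth radius is an assumed constant:

```python
import numpy as np

R_EARTH_KM = 6371.0  # assumed mean Earth radius

def orthodromic(p, q, radius=R_EARTH_KM):
    """Great-circle distance between two (lat, lon) points given in degrees."""
    phi1, lam1 = np.radians(p)
    phi2, lam2 = np.radians(q)
    c = (np.sin(phi1) * np.sin(phi2)
         + np.cos(phi1) * np.cos(phi2) * np.cos(lam1 - lam2))
    return radius * np.arccos(np.clip(c, -1.0, 1.0))  # clip guards rounding

def hausdorff(gs, gt):
    """Directed Hausdorff distance: max over gs of the min distance to gt."""
    return max(min(orthodromic(s, t) for t in gt) for s in gs)

def mean_distance(gs, gt):
    """Distance between the centroids of the two point sets."""
    return orthodromic(np.mean(gs, axis=0), np.mean(gt, axis=0))

gs = [(50.0, 8.0), (50.1, 8.2)]
gt = [(49.9, 8.1), (50.2, 8.3)]
print(hausdorff(gs, gt), mean_distance(gs, gt))
```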
10. Evaluation
Overview
Line simplification is independent of the Link Discovery framework
State of the art:
Radon for topological relation extraction
Orchid for point set distance
Topological relations:
within, touches, overlaps, intersects, equals, crosses and covers
Point-sets distance:
Hausdorff, Mean, Link, Min, and SumOfMin
12. Evaluation
Setup
Hardware
Oculus, a cluster machine running OpenJDK 64-Bit 1.8.0_161 on Ubuntu 16.04.3 LTS
Assigned 16 CPUs (2.6 GHz Intel Xeon "Sandy Bridge") and 200 GB of RAM, with a timeout limit of 4 hours for each job
Datasets
NUTS
CORINE Land Cover (CLC)
13. Evaluation
F-Measure Analysis
Q1
How much performance (i.e., F-measure) does each of the geospatial LD approaches lose when dealing with simplified geometries instead of the original ones?
The first set of experiments setup
Radon for discovering topological relations
Douglas-Peucker with simplification factors of 0.05, 0.09, 0.10 and 0.2
NUTS dataset as a source and CLC dataset as a target
15. Evaluation
F-Measure Analysis
Q1
How much performance (i.e., F-measure) does each of the geospatial LD approaches lose when dealing with simplified geometries instead of the original ones?
Setup of the second set of experiments:
Orchid for measuring the distance between point-sets
Douglas-Peucker with simplification factors of 0.05, 0.09, 0.10 and 0.2
NUTS dataset deduplicated to compute the F-measure
NUTS dataset as a source and as a target (deduplication)
16. Evaluation
F-Measure Analysis
Measure/Factor   0.05         0.09         0.1          0.2          0.3          Average      F_original
Hausdorff        0.90         0.91         0.91         0.91         0.91         0.91 ± 0.00  0.88
Mean             0.94         0.94         0.94         0.94         0.94         0.94 ± 0.00  0.94
Min              0.14         0.16         0.16         0.21         0.25         0.18 ± 0.04  0.13
Link             0.95         0.95         0.94         0.94         0.94         0.94 ± 0.00  0.94
SumOfMin         0.95         0.95         0.94         0.94         0.94         0.94 ± 0.00  0.94
Average          0.77 ± 0.36  0.78 ± 0.35  0.78 ± 0.35  0.79 ± 0.32  0.80 ± 0.31  0.77 ± 0.36
F-measure results of the point-set distance measures implemented in Orchid using the Douglas-Peucker line simplification algorithm.
17. Evaluation
Runtime Analysis
Q2
How well does each of the geospatial LD approaches scale (i.e., runtime speedup) when dealing with simplified geometries?
Setup of the third set of experiments:
Same setting as in the first set of experiments
Radon for discovering topological relations
Douglas-Peucker with simplification factors of 0.05, 0.09, 0.10 and 0.2
NUTS dataset as a source and CLC dataset as a target
18. Evaluation
Runtime Analysis
[Figure: runtimes of Radon's implementation of topological-relation LD using the Douglas-Peucker algorithm; y-axis: runtime (sec.), x-axis: topological relations (Equals, Intersects, Contains, Within, Covers, CoveredBy, Overlaps, Touches, Crosses), series: original dataset and simplification factors 0.05, 0.09, 0.1, 0.2]
19. Evaluation
Runtime Analysis
Q2
How well does each of the geospatial LD approaches scale (i.e., runtime speedup) when dealing with simplified geometries?
Setup of the fourth set of experiments:
Same setting as in the second set of experiments
Orchid for measuring the distance between point-sets
Douglas-Peucker with simplification factors of 0.05, 0.09, 0.10 and 0.2
NUTS dataset deduplicated to compute the F-measure
NUTS dataset as a source and as a target (deduplication)
21. Evaluation
LD Relations Analysis
Q3
Which relation is the most/least affected by the simplification process?
Same setting in the first and second sets of experiments
The overlaps relation has the most affected F-measure when using the Douglas-Peucker algorithm (see Table 14)
The equals, crosses and touches relations are not affected at all by any simplification (see Table 14)
In the case of point-set measures, the F-measure of the min measure is the most affected (see Table 16)
22. Evaluation
Simplification Runtime Analysis
Q4
What is the run time cost of simplification?
Same setting as in the third and fourth sets of experiments
23. Evaluation
Simplification Runtime Analysis
[Figure: runtime of Radon on the original data vs. simplified data using the Douglas-Peucker algorithm; left panel: total runtime (sec.) for all topological relations, right panel: average runtime (sec.) of a single topological relation; bars: Simplification, RADON on Simpl., RADON + Simpl., RADON on original]
24. Conclusion & Future Work
Conclusion
Studied the usage of line simplification as a preprocessing step of LD approaches over
geospatial RDF knowledge bases
Studied the behaviour of two categories of geospatial linking approaches (i.e., the topological
relations and point-set distances)
On average, an F-measure of 0.94 was achieved using the Douglas-Peucker algorithm and 0.69 using the Visvalingam-Whyatt algorithm
Speedups of up to 19.8× were gained using the Douglas-Peucker algorithm and up to 67.3× using the Visvalingam-Whyatt algorithm
Future Work
Guarantee the minimum F-measure loss
Determine the fittest line simplification algorithm and its best parameters to achieve a better trade-off between F-measure and runtime
25. Thank you for your Attention!
Abdullah Fathi Ahmed
afaahmed@mail.uni-paderborn.de
https://github.com/dice-group/Orchid
Acknowledgment
This work has been supported by LEDS project, Eurostars Project SAGE (GA no. E!10882), the H2020 projects SLIPO
(GA no. 731581) and HOBBIT (GA no. 688227) as well as the DFG project LinkingLOD (project no. NG 105/3-2) and
the BMWI Project GEISER (project no. 01MD16014).