This document presents a hybrid deep learning approach for vertex finding using 1D convolutional neural networks. It describes generating 1D kernel densities from tracking information, building target distributions, and using a CNN architecture with an adjustable cost function that trades the false positive rate against efficiency. The approach achieves 93.87% efficiency with a false positive rate of 0.251 on test data. Future work includes incorporating additional xy information and exploring full 2D kernel densities.
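As a rough illustration of the ingredients named above (not the authors' actual network or cost function), the sketch below wires a small 1D CNN that maps a z-axis kernel density histogram to a per-bin primary-vertex probability; the bin count, layer sizes, and kernel widths are placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical 1D CNN: KDE histogram in, per-bin PV probability out.
# Layer counts, channel widths, and the 4000-bin KDE are assumptions,
# not the configuration reported in the talk.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=25, padding=12), nn.LeakyReLU(),
    nn.Conv1d(16, 16, kernel_size=15, padding=7), nn.LeakyReLU(),
    nn.Conv1d(16, 1, kernel_size=5, padding=2),
    nn.Sigmoid(),  # probability that each z bin contains a primary vertex
)

kde = torch.randn(8, 1, 4000)   # batch of 8 dummy KDE histograms
pv_prob = model(kde)            # shape (8, 1, 4000)
```

An asymmetry weight in the training loss, penalizing spurious peaks more or less heavily than missed vertices, is one way such an adjustable cost function can trade false positives against efficiency.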
ACAT 2019: A hybrid deep learning approach to vertexing - Henry Schreiner
This document presents a hybrid deep learning approach for vertex finding in high-energy physics experiments. It uses a 1D convolutional neural network to analyze kernel density estimates of track information in order to identify primary vertex positions. The approach achieves primary vertex finding efficiencies of 88-94% with low false positive rates comparable to traditional algorithms. The authors demonstrate tuning of the efficiency-false positive rate tradeoff and discuss plans to improve performance by incorporating additional track information and iterative refinement.
HOW 2019: Machine Learning for the Primary Vertex Reconstruction - Henry Schreiner
The document describes a machine learning approach for primary vertex reconstruction in high-energy physics experiments. A hybrid method is proposed that uses a 1D convolutional neural network to analyze histograms produced from tracking data. The network is able to find primary vertices with high efficiency and tunable false positive rates, demonstrating the potential of machine learning for this task. Future work involves adding more tracking information and iterating between track association and vertex finding to improve performance.
LHCb Computing Workshop 2018: PV finding with CNNs - Henry Schreiner
The document discusses using a convolutional neural network (CNN) to quickly find primary vertices (PVs) in high-energy physics events recorded by the LHCb experiment. A prototype tracking algorithm is used to generate a 1D kernel density estimate (KDE) histogram from hit triplets. This histogram is then used to train a CNN to predict the locations of PVs. Initial results show the CNN approach can find PVs with 70-75% efficiency and a false positive rate of 0.08-0.13, outperforming current algorithms. Further work aims to improve resolution, find secondary vertices, and integrate the approach into iterative tracking.
Evaluation of geometrical parameters of buildings from SAR images - Federico Ariu
This document discusses evaluating geometrical parameters of buildings from SAR images. It presents an algorithm developed to retrieve building orientation, height, and double scattering lines. SAR imaging pros and cons are explained. Electromagnetic scattering models for single and double scattering are described. Simulations are shown for 512x512 and ERS-1 C sensors to test the algorithm under different building orientations. The algorithm uses linear regression to determine building orientation from double scattering lines. Conclusions state the algorithm can accurately retrieve parameters given sufficient dots on scattering lines but the range of evaluable angles is limited.
Using Very High Resolution Satellite Images for Planning Activities in Mining - Argongra Gis
Pleiades satellite imagery can be used to generate high-resolution digital elevation models, contour lines, and 3D models for mining sector planning activities. A case study acquired 240 sq km of Pleiades stereo imagery with less than 5% cloud cover to generate a 5m DEM and detailed contour lines and 3D visualizations. The Pleiades data provided more accurate topographic information than existing SRTM data for environmental studies, monitoring mining volume changes, and other planning purposes in the mining sector.
This document proposes a generalized division-free architecture and compact memory structure for resampling in particle filters. It aims to avoid the high hardware cost of traditional multinomial resampling by using accumulators and comparators instead of division and normalization. The architecture is independent of the number of particles and can be used for different resampling methods. Memory usage is optimized by accumulating weights and random numbers on-the-fly instead of storing cumulative sums, reducing area by up to 45% and memory usage by up to 50%. The architecture achieves resampling without ordering, normalization or generating ordered random numbers.
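A software analogue of the idea, offered only as a sketch: systematic resampling can run on unnormalised weights by scaling the comparison thresholds up by the total weight instead of dividing every weight down, mirroring the accumulate-and-compare structure described (the hardware accumulators and comparators themselves are not modelled here).

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling without per-particle division/normalisation."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(weights)
    total = weights.sum()
    # Scale thresholds up by the total weight rather than weights down.
    thresholds = (rng.uniform() + np.arange(n)) * (total / n)
    running = np.cumsum(weights)                 # on-the-fly weight accumulation
    return np.searchsorted(running, thresholds)  # ancestor index per particle

idx = systematic_resample(np.array([0.1, 2.0, 0.5, 1.4]))
```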
This document discusses vertical axis wind turbine (VAWT) performance prediction theories and a single stream tube model for VAWTs. It presents the angle of attack equation for the single stream tube model assuming zero tilt angle. It also provides the iterative equation used to determine the induction factor and the power equation that is integrated over a full rotation to predict average power. The document indicates that understanding and integrating various engineering disciplines is required to make meaningful contributions to wind turbine research and development.
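The iterative determination of the induction factor can be sketched generically; the document's exact blade-element expressions are not reproduced here, and the momentum-theory inversion of C_T = 4a(1-a) below is a standard textbook choice, not necessarily the one used in the slides.

```python
def solve_induction_factor(blade_element_ct, a0=0.1, tol=1e-6, max_iter=200):
    """Fixed-point iteration: assume a, compute C_T from blade elements,
    then update a from the momentum-theory relation C_T = 4 a (1 - a)."""
    a = a0
    for _ in range(max_iter):
        ct = blade_element_ct(a)                 # user-supplied aerodynamics
        a_new = 0.5 * (1.0 - (1.0 - ct) ** 0.5)  # invert C_T = 4a(1-a)
        if abs(a_new - a) < tol:
            return a_new
        a = a_new
    return a

# Example with a hypothetical thrust-coefficient model:
a = solve_induction_factor(lambda a: 0.8 * (1 - a))
```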
This document provides information about power patterns in Colorado during winter, wind turbine performance prediction theories, and the fields required to contribute to wind energy technology. It summarizes:
1) A typical winter day in Colorado sees power demand peak in the morning and evening with lower demand during the day, while wind power generation is highest during the night and midday.
2) Common theories for predicting vertical axis wind turbine performance include single stream tube, multiple stream tube, and fixed wake models.
3) Developing contributions to wind energy requires expertise in many disciplines including fluid dynamics, materials science, engineering, programming, and business/policy areas.
This document discusses different sampling techniques used in population surveys including simple random sampling, systematic sampling, stratified sampling, cluster sampling, convenience sampling, and quota sampling. It provides examples of how to calculate the sample size for each technique and notes the advantages and disadvantages of each.
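For concreteness (a standard result, stated here for reference rather than taken from the document): for simple random sampling of a proportion, a common sample-size formula is

$$ n = \frac{z^{2}\, p\,(1-p)}{e^{2}} $$

where z is the confidence z-score, p the anticipated proportion, and e the margin of error; with z = 1.96, p = 0.5, and e = 0.05 this gives n ≈ 385.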
Neighbourhood Preserving Quantisation for LSH SIGIR Poster - Sean Moran
This document proposes a neighbourhood preserving quantisation (NPQ) method for locality sensitive hashing (LSH) that assigns multiple bits per hyperplane using multiple thresholds, rather than the standard single bit. The NPQ method optimizes an F1 score using pairwise constraints from training data to determine threshold values. Evaluation on image retrieval tasks shows NPQ consistently outperforms single and double bit baselines across different projection methods, achieving higher precision-recall curves, especially at higher bit rates. Future work includes exploring variable bits per hyperplane and full retrieval evaluations.
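A minimal sketch of the multi-threshold idea follows; the F1-optimised threshold selection itself is not shown, and the fixed-width bit encoding is an assumption rather than the paper's codebook.

```python
import numpy as np

def multi_bit_quantise(projections, thresholds):
    """Assign ceil(log2(T+1)) bits per hyperplane using T sorted thresholds,
    instead of the single sign bit of standard LSH."""
    regions = np.digitize(projections, thresholds)        # region index 0..T
    n_bits = int(np.ceil(np.log2(len(thresholds) + 1)))
    return [format(r, f"0{n_bits}b") for r in regions]    # fixed-width codes

# Three thresholds -> 4 regions -> 2 bits per projected value.
codes = multi_bit_quantise(np.array([-1.2, -0.1, 0.3, 2.0]),
                           np.array([-0.5, 0.0, 0.5]))    # ['00','01','10','11']
```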
This document summarizes a street lighting project including:
1) Planning data for a single roadway with details on the luminaire, arrangement, and mounting specifications.
2) A luminaire parts list specifying the model number, luminous flux, and wattage.
3) Renderings showing the false color lighting distribution.
4) Valuation field details for the roadway including isolines, average lux, minimum lux, maximum lux, and uniformity ratios.
GoogLeNet introduced several key insights for designing efficient deep learning networks (a minimal Inception-module sketch follows this list):
1. Exploit local correlations in images by concatenating 1x1, 3x3, and 5x5 convolutions along with pooling.
2. Decrease dimensions before expensive convolutions using 1x1 convolutions for dimension reduction.
3. Stack inception modules upon each other, occasionally inserting max pooling layers, to allow tweaking each module.
4. Counter vanishing gradients with intermediate losses added to the total loss for training deep networks.
5. End with a global average pooling layer instead of fully connected layers to avoid overfitting.
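A minimal sketch of such a module, with illustrative channel dimensions and activations omitted for brevity (not the exact GoogLeNet configuration):

```python
import torch
import torch.nn as nn

# Inception-style module: parallel 1x1, 3x3, and 5x5 convolutions plus a
# pooling branch, with 1x1 "bottleneck" convolutions reducing channel depth
# before the expensive 3x3 and 5x5 filters. Branch outputs are concatenated
# along the channel dimension.
class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, c1, kernel_size=1)
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, kernel_size=1),   # dimension reduction
            nn.Conv2d(c3_red, c3, kernel_size=3, padding=1),
        )
        self.b5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, kernel_size=1),   # dimension reduction
            nn.Conv2d(c5_red, c5, kernel_size=5, padding=2),
        )
        self.bp = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1),
        )

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

x = torch.randn(1, 192, 28, 28)
y = InceptionModule(192, 64, 96, 128, 16, 32, 32)(x)  # -> (1, 256, 28, 28)
```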
The document discusses the challenges of ground-based astronomical array imaging at far-infrared wavelengths. It covers topics such as data reduction techniques like direct mapping and iterative map-making methods. Scanning strategies that provide noise resistance, large-scale sensitivity, and coverage are explored through simulations. Common strategies like on-the-fly scanning, Lissajous patterns, billiard scans, and spirals are analyzed and compared. Examples of real observations using these techniques are also presented. The document emphasizes that careful consideration of both data reduction methods and scanning strategies is needed to produce high-quality images from ground-based submillimeter arrays.
This document discusses direction of arrival (DOA) estimation using a two-element antenna array. It describes simulating different radiation patterns from varying the phase between antennas. Randomly located nodes are generated and their received signal strength calculated using a two-ray model for different radiation patterns. The pattern with the highest RSS value for a node indicates the most likely region it is located in, allowing estimation of each node's DOA. While results show this method can determine DOA, more research is needed to narrow estimates.
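The pattern-steering ingredient can be sketched with the standard two-element array factor; the document's exact geometry and two-ray propagation model are not reproduced here.

```python
import numpy as np

def array_factor(theta, d_over_lambda, beta):
    """|AF| of two isotropic elements spaced d apart with phase offset beta."""
    psi = 2 * np.pi * d_over_lambda * np.cos(theta) + beta
    return np.abs(1 + np.exp(1j * psi))

theta = np.linspace(0.0, np.pi, 181)
patterns = {b: array_factor(theta, 0.5, b) for b in (0.0, np.pi / 2, np.pi)}
# Each phase offset yields a differently oriented lobe; comparing a node's
# RSS across patterns indicates which angular region it most likely occupies.
```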
Adaptive Channel Prediction, Beamforming and Scheduling Design for 5G V2I Net... - T. E. BOGALE
The document proposes and evaluates an adaptive channel prediction, beamforming, and scheduling design for 5G vehicle-to-infrastructure networks. It presents an RLS-based algorithm to predict time-varying channel impulse responses and jointly optimizes beamforming vectors and vehicle scheduling to maximize throughput. Simulation results show the proposed design outperforms alternatives when scheduling a single vehicle, but performance degrades with increasing numbers of scheduled vehicles due to accumulated prediction errors.
Distance and Time Based Node Selection for Probabilistic Coverage in People-C... - Ubi NAIST
This document proposes algorithms to select mobile sensor nodes for probabilistic coverage in people-centric sensing applications. It formulates the problem of ensuring an area of interest (AOI) is covered with probability α within time T. Two algorithms are presented: Inter-Location Based (ILB) selects nodes far apart, and Inter-Meeting Time Based (IMTB) selects nodes whose expected meeting time is late. Simulation results show IMTB selects fewer nodes than ILB while maintaining coverage, especially for larger AOIs, T, and node populations. Future work includes updating the node selection and extending the selection area.
Presentation at ISSDQ'15 (La Grande-Motte, France) on using image-based clutter methods for assessing the complexity of generalized maps or maps at different scales.
1) The document describes a real-time GPU implementation of visual smoke simulation using the incompressible Navier-Stokes equations.
2) Key steps in the simulation algorithm include adding forces, advecting velocity and scalar fields, solving for pressure, projecting the velocity field, and applying boundary conditions (a CPU sketch of the advection step follows this list).
3) Volume rendering is achieved by slicing the 3D grid from the viewer's perspective and compositing the slices using the "under" operator, implementing shadows using half-angle slicing.
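A CPU sketch of the advection stage (semi-Lagrangian backtrace with bilinear sampling, as in the stable-fluids family of methods; grid details and the GPU implementation are not reproduced):

```python
import numpy as np

def advect(field, u, v, dt):
    """One semi-Lagrangian advection step on a uniform grid
    (grid spacing folded into the velocity units)."""
    ny, nx = field.shape
    jj, ii = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    # Backtrace departure points, clamped to the grid.
    x = np.clip(ii - dt * u, 0, nx - 1)
    y = np.clip(jj - dt * v, 0, ny - 1)
    x0 = np.floor(x).astype(int)
    y0 = np.floor(y).astype(int)
    x1 = np.minimum(x0 + 1, nx - 1)
    y1 = np.minimum(y0 + 1, ny - 1)
    fx, fy = x - x0, y - y0
    # Bilinear interpolation at the departure points.
    top = (1 - fx) * field[y0, x0] + fx * field[y0, x1]
    bot = (1 - fx) * field[y1, x0] + fx * field[y1, x1]
    return (1 - fy) * top + fy * bot
```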
Urban 3D Semantic Modelling Using Stereo Vision, ICRA 2013 - Sunando Sengupta
1) Given a sequence of stereo images, the pipeline generates a dense 3D semantic model of the urban environment.
2) Depth maps are generated from stereo images and fused into a volumetric representation using camera poses from feature tracking.
3) Semantic segmentation of street view images is done using a CRF model, and labels are projected onto the 3D model faces to generate the semantic model.
4) The semantic model is evaluated by projecting it back to the input images and calculating metrics like recall and intersection over union. Future work includes real-time implementation and combining image and geometric context.
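For reference, the intersection-over-union metric mentioned in point 4 (the standard definition, shown here for one semantic label):

```python
import numpy as np

def iou(pred, truth, label):
    """Intersection over union of one class between two label images."""
    p, t = pred == label, truth == label
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else float("nan")
```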
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/10/introduction-to-simultaneous-localization-and-mapping-slam-a-presentation-from-gareth-cross/
Independent game developer (and former technical lead of state estimation at Skydio) Gareth Cross presents the “Introduction to Simultaneous Localization and Mapping (SLAM)” tutorial at the May 2021 Embedded Vision Summit.
This talk provides an introduction to the fundamentals of simultaneous localization and mapping (SLAM). Cross aims to provide foundational knowledge, and viewers are not expected to have any prerequisite experience in the field.
The talk consists of an introduction to the concept of SLAM, as well as practical design considerations in formulating SLAM problems. Visual inertial odometry is introduced as a motivating example of SLAM, and Cross explains how this problem is structured and solved.
Object Segmentation in Satellite Imagery (Kaggle DSTL) / Artur Kuzin (Avito) - Ontico
RIT++ 2017, ML + IoT + Security track
Belo Horizonte Hall, June 6, 16:00
Abstract:
http://ritfest.ru/2017/abstracts/2802.html
In this talk I will cover our solution to the object segmentation task for satellite imagery posed in the Kaggle competition Dstl Satellite Imagery Feature Detection, in which my teammate Roman Solovyov and I took 2nd place.
I will briefly describe how a segmentation neural network works, then show examples of network modifications that account for the specifics of the task, along with training techniques that significantly improve the final accuracy. All of the top-5 solutions will be covered.
As a bonus: the story of how the leaderboard could be broken a couple of days before the end of the competition.
This document outlines a project to visually inspect wind turbine blades using drones and artificial intelligence. It defines the problem of creating composite images from drone photos of blades on land and offshore. The proposed solution is to use a cross-correlation algorithm to combine images with 2500px and 3500px overlaps for on-land and offshore blades respectively. The initial results from this algorithm are promising, and future work involves expanding the algorithm to handle vertical shifts and using deep learning on an image database of offshore wind turbines.
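A sketch of the core alignment step, estimating the shift between two overlapping strips via cross-correlation; the strips are treated as 1D for simplicity, and the project's 2500/3500 px overlaps and 2D handling are not reproduced.

```python
import numpy as np

def estimate_shift(strip_a, strip_b):
    """Lag (in pixels) that best aligns two 1D overlap strips."""
    a = strip_a - strip_a.mean()
    b = strip_b - strip_b.mean()
    corr = np.correlate(a, b, mode="full")     # cross-correlation over all lags
    return int(corr.argmax()) - (len(b) - 1)   # zero lag sits at index len(b)-1

rng = np.random.default_rng(0)
base = rng.random(4000)
print(estimate_shift(base[:3000], base[100:3100]))  # recovers the 100 px shift
```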
Scratch to Supercomputers: Bottoms-up Build of Large-scale Computational Lens... - inside-BigData.com
In this deck from the 2018 Swiss HPC Conference, Gilles Fourestey from EPFL presents: Scratch to Supercomputers: Bottoms-up Build of Large-scale Computational Lensing Software.
"LENSTOOL is a gravitational lensing software that models mass distribution of galaxies and clusters. It was developed by Prof. Kneib, head of the LASTRO lab at EPFL, et al., starting from 1996. It is used to obtain sub-percent precision measurements of the total mass in galaxy clusters and constrain the dark matter self-interaction cross-section, a crucial ingredient to understanding its nature.
However, LENSTOOL lacks efficient vectorization and only uses OpenMP, which limits its execution to one node and can lead to execution times that exceed several months. Therefore, the LASTRO and the EPFL HPC group decided to rewrite the code from scratch and in order to minimize risk and maximize performance, a bottom-up approach that focuses on exposing parallelism at hardware and instruction levels was used. The result is a high performance code, fully vectorized on Xeon, Xeon Phis and GPUs that currently scales up to hundreds of nodes on CSCS’ Piz Daint, one of the fastest supercomputers in the world."
Watch the video: https://wp.me/p3RLHQ-ili
Learn more: https://infoscience.epfl.ch/record/234382/files/EPFL_TH8338.pdf?subformat=pdfa
and
http://www.hpcadvisorycouncil.com/events/2018/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Large scale landuse classification of satellite imagery - Suneel Marthi
This document summarizes a presentation on classifying land use from satellite imagery. It describes using a neural network to filter out cloudy images, segmenting images with a U-Net model to identify tulip fields, and implementing the workflow with Apache Beam for inference on new images. Examples are shown of detecting large and small tulip fields. Future work proposed includes classifying rock formations using infrared bands and measuring crop health.
Landuse Classification from Satellite Imagery using Deep Learning - DataWorks Summit
With the abundance of remote sensing satellite imagery, the possibilities are endless as to the kind of insights that can be derived from them. One such use is to determine land use for agriculture and non-agricultural purposes.
In this talk, we’ll be looking at leveraging Sentinel-2 satellite imagery data along with OpenStreetMap labels to be able to classify land use as agricultural or non-agricultural.
Sentinel-2 data has a 10-meter resolution in RGB bands and is well-suited for land use classification. Using these two datasets, many different machine learning tasks can be performed, such as segmenting images into two classes (farm land and non-farm land) or the more challenging task of identifying which crop type is being cultivated on a field.
For this talk, we’ll be looking at leveraging convolutional neural networks (CNNs) built with Apache MXNet to train deep learning models for land use classification. We’ll be covering the different deep learning architectures considered for this particular use case along with the appropriate metrics.
We’ll be leveraging streaming pipelines built on Apache Flink and Apache NiFi for model training and inference. Developers will come away with a better understanding of how to analyze satellite imagery, and of the different deep learning architectures along with their pros and cons for this task. (Suneel Marthi and Chris Olivier, Software Development Engineers, Amazon Web Services)
DSD-INT 2015 - 3Di pilot application in Taiwan - Jhih-Cyuan Shen, Geert Prinsen - Deltares
3Di is a flood modeling software that allows for fast and accurate modeling using detailed elevation data. It allows calculations to be done interactively in the cloud on any device. The document discusses pilots of 3Di modeling in Taiwan, including applications in Meifu and Sanyei areas. It proposes various ways 3Di could be coupled with FEWS-Taiwan, an existing flood early warning system, including running 3Di models standalone or in the cloud driven by FEWS input and measures, and presenting 3Di results within FEWS or on a live 3Di site. Coupling 3Di with FEWS could combine their respective strengths while addressing challenges of running models interactively in the cloud.
2019 IML workshop: A hybrid deep learning approach to vertexing - Henry Schreiner
A hybrid deep learning approach is proposed for vertex finding using 1D convolutional neural networks on kernel density estimates from tracking data. The approach generates 1D histograms from 3D tracking data and uses a CNN to classify primary vertex positions. In a proof-of-concept on simulated data, it achieves primary vertex finding efficiencies and false positive rates comparable to traditional algorithms, with tunable efficiency-false positive tradeoffs. Future work includes incorporating additional tracking features, associating tracks to vertices, and deploying the inference engine for the LHCb trigger.
A deep learning model using convolutional neural networks is proposed for lithography hotspot detection. The model takes layout clip images as input and outputs a prediction of hotspot or non-hotspot. It uses several convolutional and pooling layers to automatically learn features from the images without manual feature engineering. Evaluation shows the deep learning model achieves higher accuracy than previous shallow learning methods that rely on manually designed features.
The document discusses enhancements to the generalized sidelobe canceller (GSC) algorithm for audio beamforming. It presents:
1) Amplitude scaling models to account for near-field effects, including a 1/r model, acoustic physics model, and statistical model. Experimental results found the acoustic physics model provided a small improvement.
2) Automatic target alignment using cross-correlation and a threshold, but experimental results found this did not improve performance over the standard GSC due to low signal-to-noise ratios.
3) Analysis of different array geometries using beamfield plots and simulations, finding array geometry has the largest impact on performance and random arrays have potential if well described.
Decision Forests and discriminant analysis - potaters
This document summarizes a tutorial on randomised decision forests and tree-structured algorithms. It discusses how tree-based algorithms like boosting and random forests can be used for tasks like object detection, tracking and segmentation. It also describes techniques for speeding up computation, such as converting boosted classifiers to decision trees and using multiple classifier systems. The tutorial is structured in two parts, covering tree-structured algorithms and randomised forests.
Dream3D and its Extension to Abaqus Input Files - Matthew Priddy
This presentation is an overview of our current usage of Dream3D for generating digital microstructures from 2D EBSD scan data, particularly grain size distribution, misorientation distribution, and pole figures.
This presentation also mentions our plan for harnessing the Dream3D output formats to generate Abaqus input files (.inp).
This document describes the Illinois Scan Architecture, a technique for reducing test costs for chips with scan designs. It works by dividing the main scan chain into multiple parallel internal chains, with a single scan input pin. This allows test vectors to be broadcast to all chains simultaneously, reducing test time and data volume by the number of chains with little impact on fault coverage. The document provides experimental data showing reductions in test vectors, cycles, and data volume for several ISCAS circuits using Illinois Scan. It also discusses techniques for further optimizing the technique, such as grouping chains intelligently to minimize the number of scan input pins needed.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-yu
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Chen-Ping Yu, Co-founder and CEO of Phiar, presents the "Separable Convolutions for Efficient Implementation of CNNs and Other Vision Algorithms" tutorial at the May 2019 Embedded Vision Summit.
Separable convolutions are an important technique for implementing efficient convolutional neural networks (CNNs), made popular by MobileNet’s use of depthwise separable convolutions. But separable convolutions are not a new concept, and their utility is not limited to CNNs. Separable convolutions have been widely studied and employed in classical computer vision algorithms as well, in order to reduce computation demands.
We begin this talk with an introduction to separable convolutions. We then explore examples of their application in classical computer vision algorithms and in efficient CNNs, comparing some recent neural network models. We also examine practical considerations of when and how to best utilize separable convolutions in order to maximize their benefits.
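As a concrete illustration of the savings (a sketch with illustrative channel counts; activations and batch normalization omitted):

```python
import torch.nn as nn

# Depthwise separable convolution as popularised by MobileNet: a per-channel
# (depthwise) 3x3 convolution followed by a 1x1 pointwise convolution.
# For C_in=64, C_out=128: a standard 3x3 conv has 3*3*64*128 = 73,728 weights;
# the separable version has 3*3*64 + 64*128 = 8,768, roughly 8x fewer.
def separable_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, kernel_size=3, padding=1, groups=c_in),  # depthwise
        nn.Conv2d(c_in, c_out, kernel_size=1),                         # pointwise
    )

block = separable_conv(64, 128)
```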
The first MEMS gyroscope from Maxim Integrated, in the industry's smallest 3x3mm package
Following the purchase of MEMS manufacturer SensorDynamics in 2011, Maxim has released its first MEMS gyroscope, a very accurate and cost-effective component.
The MAX21000 continues to use the PSM-X2 process jointly developed by SensorDynamics and the Fraunhofer Institute for Silicon Technology. This technology platform includes a proprietary surface micromachining process to build the mechanical structures and gold-silicon eutectic wafer bonding that allows hermetic encapsulation of the gyro sensor.
Assembled in an LGA 3.0x3.0x0.9mm package, the MAX21000 is a low power consumption (5.4mA), high accuracy 3-axis gyroscope targeted at mobile applications.
The report includes a detailed technical and cost comparison with state-of-the-art 3x3mm MEMS gyros from STMicroelectronics, Bosch Sensortec and InvenSense. Surprisingly, Maxim is able to provide a very competitive component thanks to a significant reduction in silicon area.
Discover all the details in the report: http://www.i-micronews.com/reports/Maxim-Integrated-MAX21000-3-Axis-MEMS-Gyroscope/1/449/
Title: Detecting Potential Biases in Sequential Hand Gesture Recognition
This slide deck showcases my master's thesis, which explores potential biases in sequential hand gesture recognition. The implemented model, a CNN based on the VGG16 architecture, achieved 99.97% accuracy. The analytical framework was built with the iNNvestigate toolbox, using Layerwise Relevance Propagation (LRP). To further interpret the LRP results, I applied agglomerative clustering via the Clustimage library.
For more in-depth information, feel free to connect with me on LinkedIn.
Shamim Miroliaei
This document discusses phased array antennas and antenna synthesis. It describes how a phased array antenna uses multiple antennas with adjustable phase delays to steer beams in different directions. It also covers techniques for antenna synthesis, including Dolph-Chebyshev and Taylor methods, to design arrays with low sidelobes and optimize parameters like element spacing and excitation amplitudes. Finally, it compares conventional antennas to smart antenna arrays, noting that adaptive arrays can actively direct beams towards desired signals while rejecting interference.
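A small sketch of the Dolph-Chebyshev step using SciPy's Chebyshev window for the excitation amplitudes; the element count, spacing, and sidelobe level below are illustrative choices, not values from the document.

```python
import numpy as np
from scipy.signal.windows import chebwin

n, d_over_lambda, sidelobe_db = 8, 0.5, 30
w = chebwin(n, at=sidelobe_db)         # Dolph-Chebyshev excitation amplitudes

theta = np.linspace(0.0, np.pi, 721)
psi = 2 * np.pi * d_over_lambda * np.cos(theta)
af = np.abs(np.exp(1j * np.outer(np.arange(n), psi)).T @ w)  # array factor
# All sidelobes of |af| sit ~30 dB below the main beam, the defining
# property of the Dolph-Chebyshev taper.
```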
This document provides an introduction and overview of stochastic frontier analysis, which models a production frontier as a stochastic function to account for noise in production. It discusses estimating the parameters of a stochastic frontier model using maximum likelihood, predicting technical efficiency at the firm and industry level, and hypothesis testing using likelihood ratio tests. The key steps are estimating the stochastic frontier model, predicting technical efficiencies based on the estimates, and testing hypotheses about inefficiency effects.
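The canonical specification behind this (the standard Aigner-Lovell-Schmidt form, stated here for reference rather than taken from the document) is

$$ \ln y_i = \mathbf{x}_i^{\top}\beta + v_i - u_i, \qquad v_i \sim N(0, \sigma_v^2), \qquad u_i \sim N^{+}(0, \sigma_u^2), $$

where v_i is symmetric noise, u_i >= 0 is the inefficiency term, and technical efficiency is predicted as TE_i = exp(-u_i).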
Big Data Competition: maximizing your potential exampled with the 2014 Higgs... - Cheng Chen
The Higgs Boson Machine Learning Challenge is one of the biggest data-analysis competitions in the world. To be successful in such a competition, Cheng applied his knowledge of computer science, mathematics, statistics, and physics, along with problem-solving habits developed during his training in civil engineering.
In this presentation, Cheng will use his experience in this competition to illustrate some important elements of big data analytics and why they matter. The content spans several disciplines, including physics, statistics, and mathematics, but no background knowledge of these areas is required to understand the essence of the presentation.
In brief, the presentation covers the following content:
An effective framework for general data mining projects,
Introduction of the competition and its related physics background,
Various techniques in data exploration and some traps to avoid,
Various ways of feature enhancement,
Model building and selection, and
Optimization of model performance
Deep Learning-Based Universal Beamformer for Ultrasound Imaging - Shujaat Khan
In ultrasound (US) imaging, individual channel RF measurements are back-propagated and accumulated to form an image after applying specific delays. While this time reversal is usually implemented using a hardware- or software-based delay-and-sum (DAS) beamformer, the performance of DAS degrades rapidly in situations where data acquisition is not ideal. Herein, for the first time, we demonstrate that a single data-driven adaptive beamformer designed as a deep neural network can generate high quality images robustly for various detector channel configurations and subsampling rates. The proposed deep beamformer is evaluated for two distinct acquisition schemes: focused ultrasound imaging and planewave imaging. Experimental results showed that the proposed deep beamformer exhibits significant performance gains for both focused and planar imaging schemes, in terms of contrast-to-noise ratio and structural similarity.
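For orientation, the DAS baseline that the deep beamformer replaces can be sketched in a few lines (single image point, uniform weighting; apodization and sub-sample interpolation omitted):

```python
import numpy as np

def das_point(rf, delays, fs):
    """Delay-and-sum output for one image point.

    rf:     (n_channels, n_samples) channel RF data
    delays: (n_channels,) focusing delays in seconds for this point
    fs:     sampling rate in Hz
    """
    idx = np.clip(np.round(delays * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()  # accumulate delayed samples
```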
This document provides an overview of the LABOCA instrument on APEX. It describes LABOCA as a 295 bolometer camera operating at 345 GHz installed on the APEX 12m telescope in Chile. Key details include the tertiary optics that couple light to the bolometer array, the semiconductor bolometers maintained at 300mK, and the data acquisition system. Observing modes include on-the-fly mapping and spiral scans over the 11.4' field of view to make deep submillimeter maps of the sky.
IRJET- Synchronization Scheme of MIMO-OFDM using Monte Carlo Method - IRJET Journal
This document proposes a synchronization scheme for MIMO-OFDM using the Monte Carlo method. It involves using an iterative turbo receiver with two levels - a soft-input soft-output SSMC detector followed by a soft channel decoder. The detector and decoder exchange information iteratively to reduce the bit error rate. Simulation results show that the proposed Monte Carlo simulation method decreases error rate as the number of transmitters and receivers increases. Bit error rate and mean squared error rate are compared for different signal-to-noise ratios to demonstrate the performance of the system.
IEEE Student Branch, Chittagong University arranged a webinar titled "From APECE to ASML: A Semiconductor Journey". Shawn Millat shared his working experience in the semiconductor industry and also shared tips about studying in Germany.
This document proposes a low complexity beam training method for hybrid MIMO in IEEE 802.11ay. It involves a two-stage approach: 1) Sector Level Sweep (SLS) stage to find analog beamformers using beam-to-omni training, and 2) Beam Refinement Phase (BRP) to improve transmit-receive beam combinations using beam-to-beam training. Two methods are described - Evolutionary Beamtraining and K-Best Beamtraining. Simulation results show the K-Best method reduces complexity by 93% compared to Pairwise Search, with negligible loss in MIMO capacity performance. Minor protocol and frame structure changes are needed to support the proposed approach within existing 802.11ad standards.
This presentation discusses how much fiber is needed for 5G coverage and the value of converging fiber optic networks for 5G and fiber-to-the-home (FTTH). It finds that deploying FTTH networks with some spare fiber capacity reserved for future 5G backhaul can significantly reduce 5G fiber deployment costs compared to building separate 5G fiber networks later. Maintaining around 24-48% spare fiber capacity in FTTH networks yields total cost savings similar to an optimally converged single network. Key challenges to reserving spare capacity are competing FTTH providers using the capacity and ensuring it remains available for future 5G.
(Paper note) Real time rgb-d camera relocalization via randomized ferns for k... - e8xu
Paper note
I take notes on the relocalization module of ElasticFusion and try to address why ElasticFusion fails (loses track) on certain image sequences, providing some directions for future research.
Glocker, Ben, et al. "Real-time rgb-d camera relocalization via randomized ferns for keyframe encoding." IEEE transactions on visualization and computer graphics 21.5 (2015): 571-583.
Similar to 2019 CtD: A hybrid deep learning approach to vertexing (20)
Modern binary build systems have made shipping binary packages for Python much easier than ever before. This talk discusses three of the most popular build systems for Python packages using the new standards developed for packaging.
This document discusses software quality assurance tooling, focusing on pre-commit. It introduces pre-commit as a tool for running code quality checks before code is committed. Pre-commit allows configuring hooks that run checks and fixers on files matching certain patterns. Hooks can be installed from repositories and support many languages including Python. The document provides examples of pre-commit checks such as disallowing improper capitalization in code comments and files. It also discusses how to configure, run, update and install pre-commit hooks.
The document summarizes Henry Schreiner's work on several Python and C++ scientific computing projects. It describes a scientific Python development guide built from the Scikit-HEP summit. It also outlines Henry's work on pybind11 for C++ bindings, scikit-build for building extensions, cibuildwheel for building wheels on CI, and several other related projects.
Flake8 is a Python linter that is fast, simple, and extensible. It can be configured through setup.cfg or .flake8 files to ignore certain checks or select others. The summary recommends using the flake8-bugbear plugin and avoiding all print statements with flake8-print. Linters like Flake8 help find errors, improve code quality, and avoid historical baggage, but one does not need every check and it is okay to build a long ignore list.
The document describes various productivity tools for Python development, including:
- Pre-commit hooks to run checks before committing code
- Hot code reloading in Jupyter notebooks using the %load_ext and %autoreload magic commands (example after this list)
- Cookiecutter for generating project templates
- SSH configuration files and escape sequences for easier remote access
- Autojump to quickly navigate frequently visited directories
- Terminal tips like command history search and referencing the last argument
- Options for tracking Jupyter notebooks with git like stripping outputs or synchronizing notebooks and Python files.
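The hot-reloading item above refers to IPython's documented autoreload extension:

```python
# In a Jupyter/IPython session:
%load_ext autoreload
%autoreload 2   # re-import all (non-excluded) modules before each cell runs
```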
SciPy22 - Building binary extensions with pybind11, scikit build, and cibuild... - Henry Schreiner
Building binary extensions is easier than ever thanks to several key libraries. Pybind11 provides a natural C++ language for extensions without requiring pre-processing or special dependencies. Scikit-build ties the premier C++ build system, CMake, into the Python extension build process. And cibuildwheel makes it easy to build highly compatible wheels for over 80 different platforms using CI or on your local machine. We will look at advancements to all three libraries over the last year, as well as future plans.
This document discusses the history and development of Python packages for high energy physics (HEP) analysis. It describes how experiments initially used ROOT and C++, but Python gained popularity for configuration and analysis. This led to the creation of packages like Scikit-HEP, Uproot, and Awkward Array to bridge the gap between ROOT files and the Python data science stack. Scikit-HEP grew to include many related packages and provides best practices through its developer pages. The future may include adopting Scikit-build for building Python packages with C/C++ extensions and running packages in the browser via WebAssembly.
PyCon 2022 - Scikit-HEP Developer Pages: Guidelines for modern packaging - Henry Schreiner
This was a PyCon 2022 lightning talk over the Scikit-HEP developer pages. It highlights best practices and guides shown there, and the quick package creation cookiecutter. And finally it demos the Pyodide WebAssembly app embedded into the Scikit-HEP developer pages!
Talk at PyCon2022 over building binary packages for Python. Covers an overview and an in-depth look into pybind11 for binding, scikit-build for creating the build, and build & cibuildwheel for making the binaries that can be distributed on PyPI.
Digital RSE: automated code quality checks - RSE group meeting - Henry Schreiner
Given at a local RSE group meeting. Covers code quality practices, focusing on Python but over multiple languages, with useful tools highlighted throughout.
This document provides best practices for using CMake, including:
- Set the cmake_minimum_required version to ensure modern features while maintaining backward compatibility.
- Use targets to define executables and libraries, their properties, and dependencies.
- Fetch remote dependencies at configure time using FetchContent or integrate with package managers like Conan.
- Import library targets rather than reimplementing Find modules when possible.
- Treat CUDA as a first-class language in CMake projects.
HOW 2019: A complete reproducible ROOT environment in under 5 minutes - Henry Schreiner
The document discusses setting up a ROOT environment using Conda in under 5 minutes. It describes downloading and installing Miniconda and then using Conda commands to create a new environment and install ROOT and its dependencies from the conda-forge channel. The ROOT package provides full ROOT functionality, including compilation and graphics, and supports Linux, macOS, and multiple Python versions.
2019 IRIS-HEP AS workshop: Boost-histogram and histHenry Schreiner
The document discusses the current state of histograms in Python and the need for a new histogramming library. It introduces boost-histogram, a C++ histogramming library, and its new Python bindings. The bindings aim to provide a fast, flexible and easily distributable histogram object for Python. Key features discussed include histogram design that treats it as a first-class object, fast filling via multi-threading, a variety of axis and storage types, and performance benchmarks showing it can be over 10x faster than NumPy for filling histograms. Distribution is focused on providing binary wheels for many platforms via continuous integration.
The document discusses the current state of histograms in Python and the need for a new library. It introduces boost-histogram, a C++ histogram library, and its new Python bindings. The bindings aim to provide a fast, flexible, and easily distributable histogram object for Python with support for multiple axis types and storage options. It also discusses plans for an additional wrapper library called hist for easy plotting and interfacing with other tools.
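As a small illustration of that first-class histogram object, a minimal boost-histogram sketch (the data here is randomly generated, purely for the example):

```python
import numpy as np
import boost_histogram as bh

# One regular axis: 50 bins from -3 to 3
h = bh.Histogram(bh.axis.Regular(50, -3, 3))
h.fill(np.random.normal(size=1_000_000))  # fast, vectorized fill

print(h.sum())          # entries landing inside the axis range
print(h[bh.loc(0.0)])   # count in the bin containing x = 0.0
```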
2019 IRIS-HEP AS workshop: Particles and decaysHenry Schreiner
The Scikit-HEP project aims to create an ecosystem for particle physics data analysis in Python. It includes packages like Particle and DecayLanguage that provide tools for working with particle data and decay descriptions. Particle allows users to easily access and search particle property data from sources like the PDG. DecayLanguage allows parsing decay file formats, representing and manipulating decay chains, and converting between decay model representations. Future work includes expanding particle ID support and improving visualization of decay trees.
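A minimal sketch of the Particle lookup API described above (property values come from the package's bundled particle data tables):

```python
from particle import Particle

# Look up a particle by PDG ID and read its measured properties
pi_plus = Particle.from_pdgid(211)
print(pi_plus.name, pi_plus.mass)  # 'pi+' and its mass in MeV

# Search the particle table with a predicate
charged_kaons = Particle.findall(lambda p: p.pdgid.abspid == 321)
```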
2019 CtD: A hybrid deep learning approach to vertexing
1. A hybrid deep learning approach to vertexing
Rui Fang¹, Henry Schreiner¹,², Mike Sokoloff¹, Constantin Weisser³, Mike Williams³
April 3, 2019
¹ The University of Cincinnati, ² Princeton University, ³ Massachusetts Institute of Technology
CtD/WIT 2019
Supported by: NSF OAC-1836650 (IRIS-HEP), NSF OAC-1740102 (SI2:SSE), NSF OAC-1739772 (SI2:SSE)
2. [Figure: PV-finding efficiency vs. number of LHCb long tracks, plus the distribution of PVs vs. track count (log scale).]
• Asymmetric cost function: found 103002 of 109733 (eff 93.87%), false positive rate = 0.251 per event
• Symmetric cost function: found 96616 of 109733 (eff 88.05%), false positive rate = 0.0485 per event
• Events in sample = 20K; training sample = 240K
3. Tracking in the LHCb upgrade Introduction
The changes
• 30 MHz software trigger
• 7.6 PVs per event (Poisson distribution)
• Roughly 5.5 visible PVs per event
The problem
• Much higher pileup
• Very little time to do the tracking
• Current algorithms too slow
We need to rethink our algorithms from the ground up...
4. Vertices and tracks Introduction
Vertices
• Events contain ≈ 7 Primary Vertices (≈ 5 visible PVs); a PV should contain 5+ long tracks
• Multiple Secondary Vertices (SVs) per event as well; an SV should contain 2+ tracks
[Diagram: beams crossing at a PV, with tracks and a downstream SV.]
Adapt to machine learning?
• Sparse 3D data (41M pixels) → rich 1D data
• 1D convolutional neural nets
• Highly parallelizable, GPU friendly
• Opportunities to visualize learning process
5. A hybrid ML approach Introduction
[Flow diagram: Tracking → Kernel generation → CNNs → Make predictions → Interpret results, with truth information feeding training and validation.]
Machine learning features (so far)
• Prototracking converts sparse 3D dataset to feature-rich 1D dataset
• Easy and effective visualization due to 1D nature
• Even simple networks can provide interesting results
What follows is a proof of principle implementation for finding PVs.
6. Kernel generation Design
Tracking procedure
• Hits lie on the 26 planes
• For simplicity, only 3 tracks shown
• Make a 3D grid of voxels (2D shown)
• Note: only z will be fully calculated and stored
• Tracking (full or partial)
• Fill in each voxel center with Gaussian PDF
• PDF for each (proto)track is combined
• Fill z “histogram” with maximum KDE value in xy (a toy sketch of this construction follows)
[Diagram: z axis (along the beam) vs. x, showing hits, tracks, the PV, and the resulting 1D kernel; slides 6–11 build this picture up step by step.]
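A minimal toy sketch of that z-KDE construction, assuming straight-line tracks on a coarse grid (all names, grid sizes, and the Gaussian width are illustrative, not the pv-finder implementation):

```python
import numpy as np

z_edges = np.linspace(-100.0, 300.0, 4001)   # 100 µm z bins
z_centers = 0.5 * (z_edges[:-1] + z_edges[1:])
xy_grid = np.linspace(-0.2, 0.2, 21)         # coarse xy grid [mm]

def track_pdf(z, x, y, z0, x0, y0, tx, ty, sigma=0.05):
    """Gaussian density of a straight track (slopes tx, ty) at a voxel center."""
    dx = x - (x0 + tx * (z - z0))
    dy = y - (y0 + ty * (z - z0))
    return np.exp(-0.5 * (dx**2 + dy**2) / sigma**2)

def make_z_kde(tracks):
    """Combine per-track PDFs in each voxel, keep the xy maximum per z bin."""
    X, Y = np.meshgrid(xy_grid, xy_grid, indexing="ij")
    kde = np.zeros_like(z_centers)
    for i, z in enumerate(z_centers):
        density = sum(track_pdf(z, X, Y, *trk) for trk in tracks)
        kde[i] = density.max()               # maximum KDE value in xy
    return kde

# Two toy tracks crossing at z = 10 mm: (z0, x0, y0, tx, ty)
tracks = [(10.0, 0.0, 0.0, 0.01, -0.02), (10.0, 0.0, 0.0, -0.015, 0.005)]
kde = make_z_kde(tracks)
print(z_centers[np.argmax(kde)])             # peak sits near z = 10 mm
```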
12. Example of z KDE histogram Design
[Figure: z KDE histogram for one event, kernel density vs. z values [mm], with markers for LHCb PVs, other PVs, LHCb SVs, and other SVs.]
Note: All events from toy detector simulation
Human learning
• Peaks generally correspond to PVs and SVs (a naive peak-finding baseline is sketched below)
Challenges
• Vertex may be offset from peak
• Vertices interact
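That observation suggests a naive baseline before any ML: run a simple peak finder over the z KDE. A hedged sketch on a toy two-bump KDE (all thresholds are invented for illustration) shows why peaks are a reasonable starting point, and why offset, interacting vertices need something better:

```python
import numpy as np
from scipy.signal import find_peaks

# Toy z KDE: two Gaussian bumps standing in for nearby vertices
z = np.linspace(-100.0, 300.0, 4000)
kde = (800 * np.exp(-0.5 * ((z - 48.9) / 0.5) ** 2)
       + 300 * np.exp(-0.5 * ((z - 51.9) / 0.5) ** 2))

# Illustrative thresholds; real peaks can be offset from the true vertex
peaks, _ = find_peaks(kde, height=50.0, distance=5)
print(z[peaks])   # ≈ [48.9, 51.9]
```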
13. Target distribution Design
Build target distribution
• True PV position as the mean of Gaussian
• σ (standard deviation) is 100 µm (simplification)
• Fill bins with integrated PDF within ±3 bins (±300 µm), as sketched below
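A sketch of that target construction (bin width, edges, and the function name are assumptions; scipy's normal CDF gives the integrated PDF per bin):

```python
import numpy as np
from scipy.stats import norm

SIGMA = 0.1       # 100 µm standard deviation, in mm
BIN_WIDTH = 0.1   # 100 µm bins, so ±3 bins = ±300 µm

def make_target(pv_z_positions, z_edges):
    """Fill each bin near a true PV with the integrated Gaussian PDF."""
    target = np.zeros(len(z_edges) - 1)
    for z_pv in pv_z_positions:
        center = np.searchsorted(z_edges, z_pv) - 1
        for b in range(max(center - 3, 0), min(center + 4, len(target))):
            lo, hi = z_edges[b], z_edges[b + 1]
            target[b] += norm.cdf(hi, z_pv, SIGMA) - norm.cdf(lo, z_pv, SIGMA)
    return target

z_edges = np.arange(-100.0, 300.0 + BIN_WIDTH, BIN_WIDTH)
target = make_target([48.904, 197.461], z_edges)
```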
15. Cost function Design
[Figure: cost vs. ŷ for the symmetric and asymmetric cost functions, shown for target values y = 0.10, y = 0.30, and y = 1e-5, on log and linear ŷ scales.]
Approach
• Symmetric cost function: low FP but low efficiency
• Adding an asymmetry term controls the trade-off between FP rate and efficiency (one illustrative form is sketched below)
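The slides do not give the functional form, so the following PyTorch fragment is only an illustrative sketch of such an asymmetry knob: it scales down the penalty for predicting probability in bins whose target is empty (the parameter name `asym` is an assumption):

```python
import torch

def asymmetric_cost(y, yhat, asym=0.0, eps=1e-9):
    """Illustrative per-bin cross-entropy with an asymmetry knob.

    asym = 0 recovers a symmetric cross-entropy; asym -> 1 removes the
    penalty for predicting probability where the target is empty, raising
    efficiency at the price of more false positives.
    """
    yhat = yhat.clamp(eps, 1 - eps)
    cost = -(y * torch.log(yhat) + (1 - asym) * (1 - y) * torch.log(1 - yhat))
    return cost.mean()

y = torch.tensor([0.0, 0.1, 0.3, 0.0])
yhat = torch.tensor([0.05, 0.12, 0.25, 0.40])
print(asymmetric_cost(y, yhat, asym=0.0))   # symmetric
print(asymmetric_cost(y, yhat, asym=0.5))   # more FP-tolerant
```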
16. False Positive and efficiency rates Results
[Figure: FP per event vs. efficiency [%], on linear and log scales, tracing the trade-off from the symmetric cost to the most asymmetric cost.]
Search for PVs (handwritten heuristic, maybe not optimal)
• Search ±5 bins (±500 µm) around a true PV
• Require at least 3 bins with predicted probability > 1% and integrated probability > 20% (sketched below)
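A sketch of that matching heuristic (function and argument names are assumptions; `pred` stands for the CNN's per-bin predicted probability):

```python
import numpy as np

def pv_found(pred, true_bin, window=5, min_bins=3,
             bin_thresh=0.01, int_thresh=0.20):
    """Does the prediction support a PV within ±window bins of true_bin?"""
    lo, hi = max(true_bin - window, 0), min(true_bin + window + 1, len(pred))
    nearby = pred[lo:hi]
    return np.sum(nearby > bin_thresh) >= min_bins and nearby.sum() > int_thresh

pred = np.zeros(4000)
pred[1973:1976] = [0.2, 0.5, 0.2]        # predicted probability near a true PV
print(pv_found(pred, true_bin=1974))     # True
```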
Tunable efficiency vs. FP
• The asymmetry parameter controls FP vs. efficiency
17. Compare predictions with targets: Examples Results
[Two example panels, each showing the kernel density, the xy position of the KDE maximum [µm], and the target vs. predicted probability in z:]
• Event 5 @ 197.4 mm: PV found (true 197.461 mm, pred 197.396 mm, Δ = -65 µm)
• Event 6 @ 36.1 mm: PV found (true 36.068 mm, pred 36.400 mm, Δ = 332 µm)
18. Compare predictions with targets: When it works Results
[Two example panels with kernel density, xy maximum, and target vs. predicted probability:]
• Event 0 @ 48.9 mm: PV found (true 48.904 mm, pred 48.954 mm, Δ = 50 µm)
• Event 0 @ 1.0 mm: masked, <5 tracks (pred 0.976 mm)
19. Compare predictions with targets: When it fails Results
[Two example panels with kernel density, xy maximum, and target vs. predicted probability:]
• Event 2 @ 65.7 mm: false positive (pred 65.696 mm, no matching true PV)
• Event 3 @ 51.9 mm: PV not found (true 51.898 mm)
20. Future addition: xy information Future plans
Adding xy information
• The xy position of the KDE maximum at each z is available
• Extra information: sharp discontinuities between PVs
• Need an iterative approach or “reduced importance”
What about a full 2D kernel?
• Not needed for LHCb currently (large xy, “low” z overlap)
• Might be useful for other detectors!
[Example panels for Event 2 @ 114.6 mm: PV found (true 114.622 mm, pred 114.597 mm, Δ = -26 µm), showing the kernel density, the xy maximum position, and target vs. predicted probability.]
21. Conclusions and plans Future plans
[Figure: PV-finding efficiency vs. number of LHCb long tracks.]
• Proof-of-principle established: a hybrid ML algorithm using a 1-dimensional KDE processed by a 5-layer CNN finds primary vertices with efficiencies and false positive rates similar to traditional algorithms.
• Efficiency is tunable; increasing the efficiency also increases the false positive rate.
• Adding information should improve performance:
• can add KDE (x,y) information to the algorithm
• can associate tracks to PV candidates, then iterate.
• Next steps: train with full LHCb MC and deploy the inference engine in the LHCb Hlt1 framework.
• Beyond LHCb:
• approach might work for ATLAS and CMS (in 2D?);
• algorithm is an interesting ML laboratory.
22. Final words Future plans
Source code:
• https://gitlab.cern.ch/LHCb-Reco-Dev/pv-finder
• Runnable with Conda on macOS and Linux
Run: conda env create -f environment-gpu.yml
Python 3.6+ and PyTorch used for machine learning code
Event generation is now available too, using the new Conda-Forge ROOT and Pythia8 packages.
Supported by:
• NSF OAC-1836650: IRIS-HEP
• NSF OAC-1740102: SI2:SSE
• NSF OAC-1739772: SI2:SSE
23. Final words Future plans
Questions?
24. More predictions with targets (1) Backup
[Two example panels with kernel density, xy maximum, and target vs. predicted probability:]
• Event 5 @ 221.5 mm: PV found (true 221.595 mm, pred 221.546 mm, Δ = -49 µm)
• Event 2 @ 114.6 mm: PV found (true 114.622 mm, pred 114.597 mm, Δ = -26 µm)
25. More predictions with targets (2) Backup
[Two example panels with kernel density, xy maximum, and target vs. predicted probability:]
• Event 6 @ 129.3 mm: PV found (true 129.336 mm, pred 129.337 mm, Δ = 1 µm)
• Event 6 @ 143.2 mm: PV found (true 143.224 mm, pred 143.199 mm, Δ = -25 µm)
26. More predictions with targets (3) Backup
[Two example panels with kernel density, xy maximum, and target vs. predicted probability:]
• Event 6 @ 150.4 mm: PV found (true 150.650 mm, pred 150.416 mm, Δ = -234 µm)
• Event 6 @ 179.6 mm: PV found (true 179.560 mm, pred 179.591 mm, Δ = 31 µm)
27. The VELO Backup
Tracks
• Originate from vertices (not shown)
• Hits originate from tracks
• We only know the true track in simulation
• Nearly straight, but tracks may scatter in material
The VELO
• A set of 26 planes that detect tracks
• Tracks should hit one or more pixels per plane
• Sparse 3D dataset (41M pixels)
28. Questions for other experiments Backup
• Beam width (x, y): 40 µm for LHCb, what is yours?
• Transverse resolution: 5–15 µm for LHCb depending on number of tracks, what is yours?
• Longitudinal resolution: 40–100 µm for LHCb depending on number of tracks, what is yours?
• Cleaning up prototracks based on IP could simplify kernel
• Can prototracking be done in the triggers?