The document describes EasyEDD, a software package for analysing tomographic energy-dispersive diffraction imaging (TEDDI) data obtained from synchrotron sources. EasyEDD allows batch processing and visualisation of large diffraction data sets. It stores data in a 3D voxel grid and includes tools for corrections, fitting, visualisation and export of results. The software combines a graphical user interface with algorithms for numerical analysis. Current functionality and future improvements are outlined.
Easy EDD PhD Talks, 28 Oct 2008
1. Easy EDD
High Throughput Powder Diffraction Program
Taha Sochi
2. TEDDI
Tomographic imaging technique which exploits a synchrotron to gain diffraction information from volume elements within a bulk sample.
Used to image the interiors of objects in terms of both density and compositional variations.
Each volume element visited yields a diffraction pattern.
3. Software
Currently there is no customised software for TEDDI analysis. Instead, scripts are in use:
Read data
Beam & counting efficiency corrections
Visualisation step
Export to Rietica/Topas
Fitting in Rietica/Topas
Visualisation of final results
4. EasyEDD
High throughput software to manage, process, analyse and visualise powder diffraction data.
Purpose: processing large quantities of data with ease and comfort using limited time and computing resources. This batch-processing approach is desperately needed for the new generation of high throughput TEDDI detectors.
The data is stored in a 3D vector. The basic unit is a “voxel” object in which all data relevant to an individual cell are stored.
5. EasyEDD
Combines Graphic User Interface (GUI) technology (e.g. wizards, dialogs, tooltips, colour coding, context menus, etc.) with standard scientific computing techniques.
6. Resources
Qt toolkit and its extensions (Qwt and QwtPlot3D) for GUI design.
Extensive library of scientific numerical recipes.
Large number of tailored algorithms, functions and techniques.
Standard C++ library.
7. Current State
Four data file formats are currently supported: SRS 16.4, ESRF XY data, Diamond MCA, and Manchester ERD format. The code can be easily extended to support other data formats.
[Figure: example data files — ERD Detector; SRS 16.4]
8. Current State
The data files are read and automatically recognised (e.g. SRS, scalars or vectors). The data is then stored and mapped on a 2D colour-coded grid. Multiple tabs from different data sources can be created (and removed) at the same time.
Correction, graphing and fitting capabilities are implemented.
9. Standard GUI window with menus, toolbars, etc.
Components
10. 2D colour-coded scalable tabs for voxel mapping, with graphic and text tooltips to show all essential file and voxel properties.
Components
11. 2D plotter to obtain a graph of intensity for any voxel by clicking on its cell. It is also used to create basis functions for fitting.
Components
12. The plotter capabilities include:
Creating, drawing, modifying and clearing fitting basis functions (polynomials of degree ≤ 6, Gauss, Lorentz and pseudo-Voigt) by simple click or press-and-drag actions.
Non-linear least squares curve fitting by the Levenberg-Marquardt algorithm.
Saving the image in several formats.
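For reference, the pseudo-Voigt profile listed above is commonly taken as a weighted sum of a Lorentzian and a Gaussian with mixing parameter η. A minimal sketch of the three peak shapes (function and parameter names are illustrative, not EasyEDD's API):

```cpp
#include <cmath>

// Gaussian peak with amplitude A, centre x0 and standard deviation sigma.
double gauss(double x, double A, double x0, double sigma) {
    double t = (x - x0) / sigma;
    return A * std::exp(-0.5 * t * t);
}

// Lorentzian peak with amplitude A, centre x0 and half-width gamma.
double lorentz(double x, double A, double x0, double gamma) {
    double t = (x - x0) / gamma;
    return A / (1.0 + t * t);
}

// Pseudo-Voigt: eta * Lorentzian + (1 - eta) * Gaussian, with 0 <= eta <= 1.
double pseudoVoigt(double x, double A, double x0,
                   double sigma, double gamma, double eta) {
    return eta * lorentz(x, A, x0, gamma)
         + (1.0 - eta) * gauss(x, A, x0, sigma);
}
```

Both components peak at A for x = x0, so the pseudo-Voigt amplitude at the centre is A for any η.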
13. Spreadsheet form which interacts with the plotter to control the refinement process, with plotting and saving capabilities to facilitate mass application of curve fitting.
Components
14. 3D plotter to obtain a graph of the current tab, where intensity is plotted as a function of the voxel position in the tab.
Components
15. Curve fitting can be done on a single peak or multiple peaks, using any number of basis functions, with or without background.
Curve Fitting
Curve fitting can be performed for a single pattern, a number of randomly selected patterns, a whole tab, or a number of tabs.
After curve fitting, a widget is created in which the statistical indicators and refinement parameters are displayed. From these, the colour code can be changed according to each of these quantities.
Restraints are partly implemented.
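As an illustration of the per-voxel quantities that can be mapped after fitting (e.g. the peak areas shown in the data samples), a Gaussian-plus-linear-background model and its analytic peak area can be sketched as follows. The names are hypothetical, not EasyEDD's internal code:

```cpp
#include <cmath>

// Model fitted to one voxel's pattern: a Gaussian peak on a linear
// background, y(x) = A*exp(-(x-x0)^2/(2*sigma^2)) + b0 + b1*x.
double gaussWithLinearBg(double x, double A, double x0, double sigma,
                         double b0, double b1) {
    double t = (x - x0) / sigma;
    return A * std::exp(-0.5 * t * t) + b0 + b1 * x;
}

// After fitting, the background-subtracted peak area follows
// analytically from the Gaussian parameters: A * sigma * sqrt(2*pi).
double gaussPeakArea(double A, double sigma) {
    return A * sigma * std::sqrt(2.0 * std::acos(-1.0));
}
```

Mapping such a derived quantity back onto the 2D grid is what produces the colour-coded area maps in the data-sample slides.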
16. From Olivier Lazzari:
Data Samples
Area of a peak after fitting to Gauss with linear background
Raw data with initial scaling
Real-life picture of test object (from Simon Jacques)
Schematic of test object (from Olivier Lazzari)
17. From Vesna Middelkoop:
Data Samples
Area of a peak after fitting to Gauss with linear background
Raw data with initial scaling
Schematic of pipe (from Vesna Middelkoop)
Illustration of TEDDI principle (from Simon Jacques)
18. Future Development
Implementing whole pattern decomposition.
Mapping data on a 3D grid (a tab for each slice) according to the real-space coordinates.
Completing the restraints implementation.
Cleaning and optimising the code.
Investigating other least squares and minimisation techniques.
Incorporating more scientific functionality such as corrections, deconvolution and final analysis.
Investigating voxel correlations.
Carrying out experimental work for testing and validation.
19. Thank you!
Questions?
Users & Mailing List
The program is currently in use by a number of researchers from several institutes, making a valuable contribution to batch-processing huge amounts of data, and hence saving a lot of time and effort.
The current mailing list includes 11 members.
To join the mailing list, send a message to:
t.sochi@mail.cryst.bbk.ac.uk