Slides from the FMRIPREP & MRIQC Focus group held Jan 13, 2017 at Stanford University.
MRIQC provides a series of image processing workflows to extract and compute no-reference (NR) image quality metrics (IQMs) for use in quality assessment protocols (QAPs) for magnetic resonance imaging (MRI). http://mriqc.readthedocs.org
FMRIPREP is an fMRI preprocessing tool that provides robust, easy-to-use preprocessing of MRI data including denoising and normalization. It performs T1-weighted preprocessing like N4 bias field correction and tissue segmentation. It also performs EPI preprocessing including motion correction, skull stripping, coregistration to T1, and estimation of noise confounds. FMRIPREP outputs preprocessed data in native and MNI space. It is designed to be transparent and works on any BIDS-formatted dataset through either Docker or Singularity containers.
Application of Method of Moments to thin wire antennas (Ivan Tim Oloya)
This document is a thesis submitted by Ivan Tim Oloya in partial fulfillment of the requirements for a Bachelor's degree in Electrical and Electronic Engineering at the University of Mauritius. The thesis applies the Method of Moments technique to analyze thin wire antennas. It includes chapters that introduce the MoM, apply it to calculate charge and current distributions on dipole and Gray Hoverman antennas, and determine antenna self-impedances. The document reviews literature on computational electromagnetism and antenna types. It also outlines the objectives, assumptions, and organization of the thesis.
Labeling the VirusShare malware dataset: lessons learned (John Seymour)
This document summarizes the process of labeling the VirusShare malware dataset for use in malware classification machine learning models. Key points include:
- The VirusShare dataset contains over 27 million malware samples that were labeled using the VirusTotal API by 30 people over 6 months to overcome rate limiting.
- An index was built using PySpark to list each malware label and the chunks it occurs in, to minimize download size and time for retrieving samples.
- Words of warning are provided about not using the labels to compare antivirus vendors and that ground truth may be noisy due to inconsistent antivirus labeling.
- Useful extensions discussed are adding timestamp data and implementing stemming to help address inconsistent labeling issues.
The document discusses various statistical procedures that can be used to analyze experimental data in SigmaPlot, including t-tests, ANOVAs, nonparametric tests, and more. It provides information on how to set up and conduct analyses, interpret results, and generate graphs and reports. The procedures can be used to compare groups, test for differences before and after treatments, perform predictions, and analyze relationships in the data.
The document discusses erbium-doped fiber lasers (EDFLs). EDFLs emit light at 1.55μm, which lies in the eye-safe region of the spectrum and is preferred for long-distance fiber optic communications. They consist of an optical fiber doped with erbium ions as the gain medium, pump lasers to excite the erbium ions, and dielectric mirrors or fiber Bragg gratings to form the optical resonator. EDFLs have revolutionized fiber optic communications, and next-generation versions may be integrated onto single chips.
Build Knowledge Graphs with Oracle RDF to Extract More Value from Your Data (Jean Ihm)
AnD Summit '19 slides - Souri Das, Matthew Perry, Melli Annamalai. This presentation covers knowledge graphs built using the RDF capabilities of Oracle Spatial and Graph. We will illustrate how to define a knowledge graph, create virtual or materialized graphs from existing data (relational tables, CSV files, etc.), derive new knowledge through logical inference, navigate and query graphs using W3C standards, analyze knowledge graphs with graph algorithms, and more. Real-world use cases from various industries will also be shared.
The document discusses different types of software testing that scientists can use to validate code including smoke tests, assertion tests, and unit tests. It also recommends continuous integration to catch errors early. Additionally, it introduces Dr. Geoffrey Chang and his experience retracting a paper due to a software bug, but notes he has since published other papers and received research grants.
Evaluation of full brain parcellation schemes using the NeuroVault database o... (Krzysztof Gorgolewski)
Slides from a talk given at SfN 2016.
The task of dividing the human brain into regions has captivated scientists for many years. In the following work we revisit this challenge and introduce a new evaluation technique that works for both cortical and subcortical parcellations. Our approach is based on data from a diverse set of cognitive experiments and employs nonparametric methods to account for smoothness and parcel-size biases.
As reported before, parcel variance was a function of parcel size, in that smaller parcels were more likely to be homogeneous (even in random data). However, when we used map-specific null distributions to account for both the smoothness of statistical maps and the number of parcels in atlases, unbiased estimates became apparent. Both the Yeo et al. and Collins et al. parcellations produce scores for random data similar to those derived from real data. In contrast, Shen et al., AAL, and Gordon et al. show lower within-parcel variance when applied to real data than when applied to random data (but no distinction can be made between them).
In addition to looking at within-parcel variance, we also applied a novel metric based on the intuition that different parts of the brain should not only be homogeneous, but also different from each other. To quantify this we calculated the ratio of between- and within-parcel variances (standardized using individual null models). This approach indirectly penalizes parcellations with too many unnecessary parcels. Using this measure we show that the Yeo et al. parcellation fits the data better (Figure 1) than the Collins et al. atlas despite having fewer parcels (7 vs. 10).
We present a novel approach to evaluating atlases and parcellations of the human brain that captures diverse patterns observed across many cognitive studies. Our testing methodology overcomes biases introduced by the size of the parcels and the smoothness of the input data and, in contrast to previous methods, can be applied to whole-brain volumetric data. We have found that, in contrast to previous reports based on resting-state cortico-cortical connectivity, the Shen et al. and AAL atlases can delineate brain regions with above-average accuracy.
This document discusses quality control for structural and functional MRI data. It explains that quality control is important for deciding which data to include in a study, diagnosing problems with data acquisition, and fixing issues before scanning new subjects. The document recommends performing quality control early and provides examples of checking for consistency across scans and subjects, as well as metrics to detect motion, artifacts, noise, and information quality in the data.
Towards open and reproducible neuroscience in the age of big data (Krzysztof Gorgolewski)
This document discusses open and reproducible neuroscience in the age of big data. It highlights that studies that share their data tend to have higher statistical quality and higher citation rates. It introduces the Brain Imaging Data Structure (BIDS) for organizing neuroimaging and behavioral data and tools that use BIDS like MRIQC and FMRIPREP for automated analysis. BIDS aims to make more data accessible to more researchers through standardized file formats and derivatives.
The document outlines the procedures for performing an annual performance evaluation of a computed tomography (CT) scanner using an ACR CT phantom. It involves tests to evaluate positioning accuracy, CT number accuracy, slice thickness, low contrast resolution, high contrast resolution, image uniformity and noise, and distance measurement accuracy. The tests involve scanning the various modules of the ACR phantom using different protocols and recording measurements of CT numbers, slice thicknesses, smallest visible rods, uniformity, artifacts, and resolved bar patterns.
Data sharing in neuroimaging: incentives, tools, and challenges (Krzysztof Gorgolewski)
This document discusses the benefits and challenges of sharing neuroimaging data. It describes several large neuroimaging datasets including the NKI Enhanced dataset and the Human Connectome Project dataset. These datasets provide rich phenotypic and imaging data for many subjects. Data sharing allows data to be reused, saves costs, and is related to higher quality studies and increased citations. However, there are also fears around data sharing such as being scooped. The document advocates for starting small with data sharing and provides resources like NeuroVault to make sharing easy.
MRI Quality Control ACR Phantom PRO MRI (Adhianto Dwi)
The document provides guidance on testing MRI image quality using a phantom. Several parameters are tested, including geometric accuracy, high-contrast resolution, slice thickness, slice position, intensity uniformity, ghosting level, and low-contrast object detectability. Measurements are taken on images produced by scanning the phantom, using measuring tools and dedicated formulas.
This document discusses the benefits and considerations of pre-registration for scientific studies. It addresses common misconceptions about pre-registration and provides guidance on how to properly pre-register a study. Pre-registration involves outlining the key elements of a study such as hypotheses, methods, and analysis plan prior to conducting the research. This helps reduce biases and distinguishes exploratory from confirmatory research. The document recommends pre-registering studies using public repositories like AsPredicted.org to increase transparency and reproducibility in science.
The SAMPL 2015 workshop aimed to showcase the SAMPL team's research activities in sub-Nyquist sampling and super-resolution applications. The workshop covered applications in medical imaging, communications, radar, and optics. It featured presentations on MRI, ultrasound, cognitive radio, and optical techniques from the SAMPL group and collaborators from academia and industry. The SAMPL vision is to tightly connect theory and engineering applications while training students in cutting-edge research.
Presentation by Stefanie Tompkins, director, Defense Sciences Office, Defense Advanced Research Projects Agency (DARPA), on Wednesday, June 1, 2016. This presentation gives an overview of DARPA, working with DARPA and the Defense Sciences Office, and descriptions of some of the current activities DSO's program managers are working on.
DARPA’s mission is to make pivotal investments in breakthrough technologies for national security, thus catalyzing the development of capabilities that give the Nation new options for preventing and creating strategic surprise.
The Defense Sciences Office (DSO) is one of six technical offices at the agency.
DSO identifies and pursues high-risk, high-payoff fundamental research initiatives across a broad spectrum of science and engineering disciplines including materials science, computing and autonomy, engineering design and manufacturing, physics, chemistry, mathematics and social science.
Adrian Miron has over 25 years of experience in nuclear and radiological engineering, emergency planning, and research. He has expertise in fields such as nuclear engineering, emergency planning, modeling, data analysis, and software development. Miron has managed over $1 million in research grants and projects, developed emergency response plans and procedures, reviewed safety documents, and authored numerous publications. Currently, he works as an emergency planning coordinator and manages several projects around emergency preparedness software and procedures.
The document provides an agenda for the Canadian Visual Analytics School (CANVAS) summer program held from July 23-26, 2012 at Simon Fraser University. The agenda details the schedule of presentations, workshops, and activities over the 4 day program, including keynote speakers from universities and industry discussing topics such as visual analytics research, interaction science, healthcare analytics, and aircraft safety. The schedule also includes hands-on workshops and demonstrations of visual analytics tools, as well as social events like a reception dinner.
The document discusses opportunities for ubiquitous networks and passive network measurement activities. It describes the goals of creating an infrastructure to support network measurements, performance analysis, and tool development. It provides status updates on passive measurement deployments at various university and research sites.
Sequential Action Patterns in Collaborative Ontology Engineering Projects: A ... (Philipp Singer)
Simon Walk's talk at CIKM '14 about our paper titled "Sequential Action Patterns in Collaborative Ontology Engineering Projects: A Case-study in the Biomedical Domain"
This document provides information about the SPIE Smart Structures/NDE 2014 conference to be held March 9-13, 2014 in San Diego, California. The conference will include 10 parallel conferences covering topics related to smart structures, non-destructive evaluation, health monitoring, biomimetics, electroactive polymers, sensors, and more. It will feature invited talks, contributed talks, posters, and a special presentation from the San Diego Zoo on bioinspiration. Attendees are invited to submit abstracts by August 26, 2013 and the conference will include an exhibition and awards program.
This document discusses the history and fundamentals of visual odometry (VO) and simultaneous localization and mapping (SLAM). It provides an overview of key developments in VO from the 1980s to present day, including the first real-time VO implementation on a robot in 1980 and use of VO on the Mars rovers in 2004. The document also summarizes the differences between VO, SFM, and V-SLAM, and describes common approaches to feature extraction, motion estimation, and optimization in VO pipelines.
On the Malware Detection Problem: Challenges & Novel Approaches (Marcus Botacin)
Marcus Botacin's PhD Defense at Federal University of Paraná (UFPR).
Advisor: Dr André Grégio
Co-Advisor: Paulo de Geus
Evaluation Committee:
Dr Leigh Metcalf, Dr Leyla Bilge, Daniel Alfonso Oliveira
This document summarizes the proceedings of the IDPSA-2012 Integrated Deterministic-Probabilistic Safety Analysis Workshop. It provides an overview of the workshop agenda and objectives, which were to discuss industry needs that could be addressed with IDPSA methods and to plan joint research activities. It summarizes 17 technical presentations given at the workshop on various applications of deterministic and probabilistic safety analysis methods and the integration of the two approaches. It also summarizes the discussions in 4 breakout sessions that aimed to clarify industry needs, priority pilot applications, and methodology development needs to facilitate the use of IDPSA. The workshop involved 50 experts from Europe and the US and helped develop ideas for an IDPSA project proposal within the
Acumen Scientific is a technology company that specializes in imaging, display, instrumentation, materials, and manufacturing technologies across the ultraviolet to infrared spectrum. They have expertise in detector and system design, fabrication, testing, failure analysis, and technology transfer. Their team of scientists and engineers have extensive experience in focal plane arrays, display technologies, camera and spectrograph instrumentation, and high temperature materials development. Acumen Scientific is able to provide end-to-end solutions from initial concept through design, manufacturing, and testing for scientific and aerospace applications.
The document presents an approach called Convolutional Analysis of code Metrics Evolution (CAME) that uses a convolutional neural network to detect anti-patterns by analyzing the historical evolution of source code metrics at the class level. An evaluation on 7 open-source systems shows that considering longer histories of metrics improves detection performance and that CAME outperforms other machine learning and anti-pattern detection techniques in terms of precision, recall, and F-measure.
Dr. Amir H. Sanjari has extensive experience in experimental particle, nuclear, and radiation physics research through positions in academia and laboratories. He has achievements in research, obtaining grants, teaching, management, and developing radiation devices. His background includes simulation work, data analysis, and coordinating large international collaborations.
Dr. Dattatreya Rachakonda has over 20 years of experience in academia and industry. He has a Ph.D in physics from IISc and has worked at IBM for 7 years managing modeling programs. He is currently a professor teaching undergraduate physics. He has extensive experience in areas such as semiconductor physics, fluid mechanics, and computational modeling.
Reliability based design and acceptance protocol for driven piles (AlexASquare1)
This dissertation focuses on developing a reliability-based design and acceptance protocol for driven pile foundations using load and resistance factor design (LRFD). The current Arkansas specifications have limitations that result in designs of questionable reliability. The research assembles a database of pile load tests and develops a MATLAB program to compute resistance factors using reliability methods like first-order second moment and Monte Carlo simulation. It also addresses updating resistance factors when new load tests are added using Bayesian techniques. A full-scale pile load testing program is recommended to verify the developed protocol.
This document discusses research on using Doppler radar systems to monitor vital signs and identify individuals based on their respiratory signatures, while addressing challenges from extraneous motion. It first provides background on short-distance radar and Doppler physiological radar. It then discusses sources of motion artifact noise and methods explored in literature to isolate biological motion signals from noise, including adaptive noise cancellation and AC/DC coupling. Experiments are described that use an optical tracking camera and RF tags to study motion compensation techniques. The goal of suppressing extraneous motion to accurately classify biological signals for health monitoring and unique identification is discussed.
Autogenous Diabetic Retinopathy Censor for Ophthalmologists - AKSHI (Asiri Wijesinghe)
A full-fledged autogenous censor for classifying the severity level of diabetic retinopathy (based on retinal lesions) and detecting the retinal vascular network to assess vessel tortuosity and identify abnormal vessels in the human retina.
Ronald J Livesay is a physicist with expertise in applied physics, radiation measurement, data analysis, and nuclear security. He received his PhD in Applied Physics from the Colorado School of Mines in 2007. He has over 10 years of experience at Oak Ridge National Laboratory, where he conducted research on radiation detection systems, developed data analysis techniques, and patented inventions related to radiation monitoring. In 2012 he co-founded Mason Livesay Scientific, LLC, where he leads research on radiation detector performance and data analysis automation.
TOPIC OF DISCUSSION: CENTRIFUGATION SLIDESHARE.pptx (shubhijain836)
Centrifugation is a powerful technique used in laboratories to separate components of a heterogeneous mixture based on their density. This process utilizes centrifugal force to rapidly spin samples, causing denser particles to migrate outward more quickly than lighter ones. As a result, distinct layers form within the sample tube, allowing for easy isolation and purification of target substances.
Mechanisms and Applications of Antiviral Neutralizing Antibodies - Creative B... (Creative-Biolabs)
Neutralizing antibodies, pivotal in immune defense, specifically bind and inhibit viral pathogens, thereby playing a crucial role in protecting against and mitigating infectious diseases. In this slide, we will introduce what antibodies and neutralizing antibodies are, the production and regulation of neutralizing antibodies, their mechanisms of action, classification and applications, as well as the challenges they face.
Embracing Deep Variability For Reproducibility and Replicability
Abstract: Reproducibility (aka determinism in some cases) constitutes a fundamental aspect in various fields of computer science, such as floating-point computations in numerical analysis and simulation, concurrency models in parallelism, reproducible builds for third-party integration and packaging, and containerization for execution environments. These concepts, while pervasive across diverse concerns, often exhibit intricate inter-dependencies, making it challenging to achieve a comprehensive understanding. In this short vision paper we delve into the application of software engineering techniques, specifically variability management, to systematically identify and make explicit the points of variability that may give rise to reproducibility issues (e.g., language, libraries, compiler, virtual machine, OS, environment variables, etc.). The primary objectives are: i) gaining insights into the variability layers and their possible interactions, ii) capturing and documenting configurations for the sake of reproducibility, and iii) exploring diverse configurations to replicate, and hence validate and ensure the robustness of, results. By adopting these methodologies, we aim to address the complexities associated with reproducibility and replicability in modern software systems and environments, facilitating a more comprehensive and nuanced perspective on these critical aspects.
https://hal.science/hal-04582287
Sexuality - Issues, Attitude and Behaviour - Applied Social Psychology - Psyc... (PsychoTech Services)
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
Anti-Universe And Emergent Gravity and the Dark Universe (Sérgio Sacani)
Recent theoretical progress indicates that spacetime and gravity emerge together from the entanglement structure of an underlying microscopic theory. These ideas are best understood in Anti-de Sitter space, where they rely on the area law for entanglement entropy. The extension to de Sitter space requires taking into account the entropy and temperature associated with the cosmological horizon. Using insights from string theory, black hole physics and quantum information theory we argue that the positive dark energy leads to a thermal volume law contribution to the entropy that overtakes the area law precisely at the cosmological horizon. Due to the competition between area and volume law entanglement the microscopic de Sitter states do not thermalise at sub-Hubble scales: they exhibit memory effects in the form of an entropy displacement caused by matter. The emergent laws of gravity contain an additional ‘dark’ gravitational force describing the ‘elastic’ response due to the entropy displacement. We derive an estimate of the strength of this extra force in terms of the baryonic mass, Newton’s constant and the Hubble acceleration scale a0 = cH0, and provide evidence for the fact that this additional ‘dark gravity force’ explains the observed phenomena in galaxies and clusters currently attributed to dark matter.
1. Stanford University
Focus group: MRIQC
Quality Control of structural and functional MRI
Oscar Esteban <oesteban@stanford.edu>
Poldrack Lab, Stanford University
January 13th, 2017
3. But sometimes, we are way too far from perfection
(MRIQC mosaic, courtesy of Joke Durnez)
10. Manual assessment on the ABIDE dataset (N=1102)
- Time-consuming
- Intra-rater bias
- Inter-rater bias
- Rater 1: 15% reject
11. Objectives of Quality Control
Exclusion criteria – as objective as possible.
Quality Badge – deciding on using a public dataset (is it appropriate for my design/study?)
Diagnosing fixable problems with data acquisition process:
- Types of sequences
- Scanner malfunctions
- Head padding
- Participant instructions
12. Image Quality Metrics (IQMs)
- Physical phantoms (Price et al., 1990)
- No-reference Image Quality Metrics (IQMs) (Woodard and Carley-Spencer, 2006)
- Aim at artifacts and analyze noise distribution (Mortamet et al., 2009)
- Combined general volumetric and artifact-targeted IQMs (Pizarro et al., 2016)
16. IQMs: Structural MRI
Noise measurement:
- Signal-to-noise ratio (SNR) - higher is better
- Contrast-to-noise ratio (CNR) - higher is better
- Sharpness (full-width half-maximum estimations) - smaller FWHM is better
- Goodness of fit of a noise model to the noise in the background (QI2) - lower is better (Mortamet et al., 2009)
- Coefficient of Joint Variation (CJV) - lower is better
Information theory:
- Foreground-Background Energy Ratio (FBER) - higher is better
- Entropy Focus Criterion (EFC) - lower is better
Artifacts:
- Segmentation using mathematical morphology (QI1) - lower is better
- Measurements on the estimated INU (intensity non-uniformity) - values around 1.0 are better
- Partial Volume Errors (PVE) - lower is better
Other: summary statistics, intracranial volume fractions (ICV)
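To make the arithmetic behind a few of these concrete, here is a minimal numpy sketch using one common set of conventions; the function names, the mask arguments, and the exact formulas are illustrative assumptions, not MRIQC's implementation.

```python
# Illustrative-only sketch of common structural IQM conventions; the exact
# formulas MRIQC uses may differ. All masks are boolean numpy arrays with
# the same shape as `img`.
import numpy as np

def snr(img, tissue_mask, air_mask):
    # Mean tissue signal over the standard deviation of background air.
    return img[tissue_mask].mean() / img[air_mask].std()

def cnr(img, gm_mask, wm_mask, air_mask):
    # GM/WM contrast over background noise; higher means crisper tissue
    # boundaries.
    return abs(img[wm_mask].mean() - img[gm_mask].mean()) / img[air_mask].std()

def cjv(img, gm_mask, wm_mask):
    # Coefficient of joint variation: pooled tissue spread over tissue
    # contrast; lower suggests less INU and noise.
    return ((img[wm_mask].std() + img[gm_mask].std())
            / abs(img[wm_mask].mean() - img[gm_mask].mean()))

def fber(img, fg_mask):
    # Foreground-background energy ratio; higher is better.
    return (img[fg_mask] ** 2).mean() / (img[~fg_mask] ** 2).mean()
```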
17. IQMs: Functional MRI
Noise measurement: SNR, tSNR, temporal standard deviation
Information theory: EFC, FBER
Confounds and artifacts:
- Framewise Displacement (FD) - lower is better
- (Standardized) DVARS (D referring to temporal derivative of timecourses, VARS referring to RMS variance over voxels) - lower is better
- Ghost-to-Signal Ratio (GSR) - lower is better
- Global correlation (GCOR) - lower is better
- Spikes (high frequency and global intensity)
- AFNI's outlier detection and quality indexes
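A similar sketch for three of the temporal measures, assuming `bold` is a 4D numpy array (x, y, z, t) and `motion` a (t, 6) array ordered rotations-then-translations; the FD and DVARS conventions follow Power et al. (2012), which is one common choice rather than necessarily MRIQC's exact code.

```python
# Illustrative temporal IQMs; array layouts and the motion-parameter
# ordering are assumptions stated in the text above.
import numpy as np

def tsnr(bold):
    # Temporal SNR: voxelwise mean over time divided by the temporal std.
    return bold.mean(axis=-1) / (bold.std(axis=-1) + 1e-9)

def framewise_displacement(motion, radius=50.0):
    # FD: sum of absolute frame-to-frame motion; rotations (radians) are
    # converted to arc length on a sphere of `radius` mm (50 mm by
    # convention in Power et al., 2012).
    params = motion.copy()
    params[:, :3] *= radius
    return np.abs(np.diff(params, axis=0)).sum(axis=1)

def dvars(bold, brain_mask):
    # DVARS: RMS over brain voxels of the frame-to-frame signal change.
    timeseries = bold[brain_mask]                      # (voxels, t)
    return np.sqrt((np.diff(timeseries, axis=-1) ** 2).mean(axis=0))
```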
18. Design of MRIQC
Inputs: BIDS (Gorgolewski et al., 2016b)
Command-line interface: BIDS-Apps (Gorgolewski et al., 2016a)
The simplest possible pipeline
The fastest possible pipeline
Robust: works on all data
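Because MRIQC follows the BIDS-Apps command-line convention cited above (positional bids_dir, output_dir, and analysis level, plus --participant_label; Gorgolewski et al., 2016a), a participant-level run can be scripted; in this sketch the dataset paths and the participant label are hypothetical.

```python
# Sketch of a participant-level MRIQC run via the BIDS-Apps convention;
# the paths and participant label below are hypothetical.
import subprocess

subprocess.run(
    [
        "mriqc",
        "/data/ds000001",         # bids_dir: any BIDS-formatted dataset
        "/data/ds000001/mriqc",   # output_dir for IQMs and visual reports
        "participant",            # analysis level: per-subject first pass
        "--participant_label", "01",
    ],
    check=True,
)
```

A second invocation at the `group` analysis level then aggregates the per-subject outputs into the group report.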
19. MRIQC Features
What can be expected from MRIQC:
- A table of IQMs per subject
- The group visual report
- An individual visual report per subject
- A first-round exercise for the data
What is not expected from MRIQC:
- The triage of participants (WIP)
- The derivatives of processing
- Non-standard morphologies: developing brains, pathology, etc.
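Since the table of IQMs per subject is a plain tabular file, a first-round screen can be scripted; the file path, file name, and IQM column below are assumptions about the output layout, which is version-dependent, not a documented contract.

```python
# Sketch of a first-pass triage over the per-subject IQM table; the TSV
# path, file name ("group_T1w.tsv"), and column name are assumed.
import pandas as pd

iqms = pd.read_csv("/data/ds000001/mriqc/group_T1w.tsv", sep="\t")

# Flag subjects more than 2 SD above the sample mean on a
# lower-is-better IQM such as the CJV.
col = "cjv"
cutoff = iqms[col].mean() + 2 * iqms[col].std()
print(iqms.loc[iqms[col] > cutoff])
```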
30. Questions TBD in the focus group
Are there any additional quality metrics that you would like to be added?
Are there any additional plots that you would like to be added?
Would you like to have diffusion MRI IQMs and reports?
Would you like to participate in manual triage/rating sessions of structural/functional/diffusion (s/f/d) MRI?
33. References I
Gorgolewski, Krzysztof J. et al. (2016a). "BIDS Apps: Improving ease of use, accessibility and reproducibility of neuroimaging data analysis methods". In: bioRxiv, p. 079145. DOI: 10.1101/079145. URL: http://biorxiv.org/content/early/2016/10/05/079145.
Gorgolewski, Krzysztof J. et al. (2016b). "The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments". In: Scientific Data 3, p. 160044. ISSN: 2052-4463. DOI: 10.1038/sdata.2016.44. URL: http://www.nature.com/articles/sdata201644.
Mortamet, Bénédicte et al. (2009). "Automatic quality assessment in structural brain magnetic resonance imaging". In: Magnetic Resonance in Medicine 62.2, pp. 365–372. ISSN: 1522-2594. DOI: 10.1002/mrm.21992. URL: http://onlinelibrary.wiley.com/doi/10.1002/mrm.21992/abstract.
Pizarro, Ricardo A. et al. (2016). "Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm". In: Frontiers in Neuroinformatics 10. ISSN: 1662-5196. DOI: 10.3389/fninf.2016.00052. URL: http://journal.frontiersin.org/article/10.3389/fninf.2016.00052/abstract.
Price, Ronald R. et al. (1990). "Quality assurance methods and phantoms for magnetic resonance imaging: Report of AAPM nuclear magnetic resonance Task Group No. 1". In: Medical Physics 17.2, pp. 287–295. ISSN: 0094-2405. DOI: 10.1118/1.596566. URL: http://scitation.aip.org/content/aapm/journal/medphys/17/2/10.1118/1.596566.
Woodard, Jeffrey P. and Monica P. Carley-Spencer (2006). "No-Reference image quality metrics for structural MRI". In: Neuroinformatics 4.3, pp. 243–262. ISSN: 1539-2791, 1559-0089. DOI: 10.1385/NI:4:3:243. URL: http://link.springer.com/article/10.1385/NI:4:3:243.