Competent Consulting Services provides laser and photonics consulting services including:
- Product design, problem solving, modeling and simulations, independent expertise, and staff training.
- Consulting for manufacturers of laser systems, users of optical technology, and investors. Technical areas include lasers, nonlinear optics, fiber optics and more.
- Services include laser design, troubleshooting, software development, proposal writing, and staff training courses tailored to customers' needs.
The document outlines an assignment on genetic engineering and DNA technology. It discusses examining the techniques of genetic engineering, manipulating DNA, and investigative opportunities with scientific techniques. Students will research and describe basic DNA techniques like extraction, electrophoresis, PCR, and cell transformation. They will write lab manuals for two procedures and explain the reasons for key steps in each technique. Finally, students will discuss practical limitations of each technique and how they can be overcome.
This document outlines an assignment on genetic engineering. Students are asked to research and describe two examples of genetic engineering - one related to crop production and one to medical science. They will create a front page covering the basic scientific principles and then profiles of each example, describing what was altered and the rationale. Students must also explain the commercial, social, and ethical concerns of each example. Finally, students must evaluate the benefits and drawbacks of using genetic engineering for each application. The assignment is assessed based on criteria for a pass, merit, and distinction grade.
This letter recommends Mr. Baoguang Xu for a job application. It details that the author has known Mr. Xu for 3 years as his professor and supervisor for his M.A.Sc. research project on developing an artificial-intelligence-based eddy current crack detection system. It states that Mr. Xu has demonstrated solid background knowledge and skills in both mechanical systems and data processing, and has published work in conferences. The letter rates Mr. Xu's performance as excellent and says he is highly motivated, hardworking, and will succeed in his future career.
On the large scale of studying dynamics with MEG: Lessons learned from the Hu... - Robert Oostenveld
As part of the Human Connectome Project (HCP), which includes high-quality fMRI, anatomical MRI, DTI, and genetic data from 1200 subjects, we have scanned and investigated a subset of 100 subjects (mostly pairs of twins) using MEG. The raw data acquired in the HCP has been analyzed using standard pipelines [1], and both the raw data and results at various levels of processing have been shared through the ConnectomeDB [2].
Throughout the HCP we have not only analyzed (resting state) MEG data, but have also developed the data analysis protocols, software, and strategies needed to achieve reproducible MEG connectivity results. The MEG data analysis software is based on FieldTrip, an open-source toolbox [3], and is shared alongside the data so that the analyses can be repeated on independent data.
In this presentation I will outline what the HCP MEG team has learned along the way and I will provide recommendations on what to do and what to avoid in making MEG studies on (resting state) connectivity more reproducible.
1. Larson-Prior LJ, Oostenveld R, Della Penna S, Michalareas G, Prior F, Babajani-Feremi A, Schoffelen JM, Marzetti L, de Pasquale F, Di Pompeo F, Stout J, Woolrich M, Luo Q, Bucholz R, Fries P, Pizzella V, Romani GL, Corbetta M, Snyder AZ; WU-Minn HCP Consortium. Adding dynamics to the Human Connectome Project with MEG. Neuroimage, 2013. doi:10.1016/j.neuroimage.2013.05.056
2. Hodge MR, Horton W, Brown T, Herrick R, Olsen T, Hileman ME, McKay M, Archie KA, Cler E, Harms MP, Burgess GC, Glasser MF, Elam JS, Curtiss SW, Barch DM, Oostenveld R, Larson-Prior LJ, Ugurbil K, Van Essen DC, Marcus DS. ConnectomeDB: Sharing human brain connectivity data. Neuroimage, 2016. doi:10.1016/j.neuroimage.2015.04.046
3. Oostenveld R, Fries P, Maris E, Schoffelen JM. FieldTrip: Open Source Software for Advanced Analysis of MEG, EEG, and Invasive Electrophysiological Data. Comput Intell Neurosci. 2011. doi:10.1155/2011/156869
This job posting is seeking an Instrumentation Technician to join a group of environmental scientists in Lancaster. The main responsibilities of the role include calibrating and maintaining electronic field and laboratory sensors, designing and building new electronic equipment, programming data loggers, and assisting with fieldwork. The ideal candidate will have a degree or equivalent qualification in engineering or science, experience in electronics, an aptitude for programming, and the ability to problem solve independently.
Lead-DBS Workshop 2020 Brisbane Programme - Andreas Horn
The document describes a 2-day workshop on deep brain stimulation (DBS) neuroimaging techniques using the Lead-DBS software package. The workshop will cover topics like electrode localization, spatial normalization, connectivity mapping, and will provide hands-on sessions for participants to practice techniques on their own data. Attendees should have some experience with MATLAB and Lead-DBS and bring their own laptops with required software installed. The agenda includes sessions on imaging pipelines, connectomics, troubleshooting, and group analysis methods.
Building a Knowledge Graph with Spark and NLP: How We Recommend Novel Drugs t... - Databricks
It is widely known that the discovery, development, and commercialization of new classes of drugs can take 10-15 years and more than $5 billion in R&D investment, only to see less than 5% of the drugs make it to market.
AstraZeneca is a global, innovation-driven biopharmaceutical business that focuses on the discovery, development, and commercialization of prescription medicines for some of the world’s most serious diseases. Our scientists have been able to improve our success rate over the past 5 years by moving to a data-driven approach (the “5R”) to help develop better drugs faster, choose the right treatment for a patient and run safer clinical trials.
However, our scientists still cannot make these decisions with all of the available scientific information at their fingertips. Data is scattered across our company as well as external public databases, every new technology requires a different data processing pipeline, and new data arrives at an increasing pace. It is often repeated that a new scientific paper appears every 30 seconds, which makes it impossible for any individual expert to keep up to date with the pace of scientific discovery.
To help our scientists integrate all of this information and make targeted decisions, we have used Spark on Azure Databricks to build a knowledge graph of biological insights and facts. The graph powers a recommendation system which enables any AZ scientist to generate novel target hypotheses, for any disease, leveraging all of our data.
In this talk, I will describe the applications of our knowledge graph and focus on the Spark pipelines we built to quickly assemble and create projections of the graph from hundreds of sources. I will also describe the NLP pipelines we have built, leveraging spaCy, BioBERT, and Snorkel, to reliably extract meaningful relations between entities and add them to our knowledge graph.
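The assembly step described in the abstract (collecting extracted relations and projecting them into a graph) can be sketched in plain Python. This is only an illustration of the core join logic, not the production Spark pipeline; the entity names, the triple format, and the confidence threshold are all hypothetical.

```python
from collections import defaultdict

# Hypothetical relation triples, shaped as an NLP pipeline might emit them:
# (subject entity, relation, object entity, confidence score).
triples = [
    ("geneA", "associated_with", "diseaseX", 0.91),
    ("drugB", "inhibits", "geneA", 0.87),
    ("geneA", "associated_with", "diseaseX", 0.55),  # duplicate, lower score
    ("drugB", "inhibits", "geneC", 0.30),            # below threshold
]

def build_graph(triples, min_confidence=0.5):
    """Deduplicate triples (keeping the highest confidence per edge) and
    build an adjacency-list projection of the knowledge graph."""
    best = {}
    for subj, rel, obj, score in triples:
        key = (subj, rel, obj)
        if score >= min_confidence and score > best.get(key, 0.0):
            best[key] = score
    graph = defaultdict(list)
    for (subj, rel, obj), score in best.items():
        graph[subj].append((rel, obj, score))
    return dict(graph)
```

A recommendation system could then traverse such a projection, e.g. ranking drugs reachable from a disease node via high-confidence edges.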
LABORATORY INFORMATION SYSTEM / RADIOLOGY INFORMATION SYSTEM - Aj Raj
The document discusses laboratory information systems (LIS) and radiology information systems (RIS). LIS are used to manage data from laboratory instruments and tests, while RIS are used to manage radiology workflow and imaging data. Both systems aim to optimize operations through electronic data collection, analysis, and reporting. They integrate with other systems and aim to streamline processes like test ordering, results reporting, and image storage and retrieval. Picture archiving and communication systems (PACS) are also discussed as a key part of RIS for storing and sharing medical images across a healthcare system.
This presentation was provided by Dr. Paul Burton of the University of Bristol during the NISO Symposium, Privacy Implications of Research Data, held on September 11, 2016, in conjunction with the International Data Week in Denver, Colorado.
If Big Data is data that exceeds the processing capacity of conventional systems, thereby necessitating alternative processing measures, we are looking at an essentially technological challenge that IT managers are best equipped to address.
The DCC is currently working with 18 HEIs to support and develop their capabilities in the management of research data and, whilst the aforementioned challenge is not usually core to their expressed concerns, are there particular issues of curation inherent to Big Data that might force a different perspective?
We have some understanding of Big Data from our contacts in the Astronomy and High Energy Physics domains, and the scale and speed of development in Genomics data generation is well known, but the inability to provide sufficient processing capacity is not one of their more frequent complaints.
That’s not to say that Big Science and its Big Data are free of challenges in data curation; only that they are shared with their lesser cousins, where one might say that the real challenge is less one of size than diversity and complexity.
This brief presentation explores those aspects of data curation that go beyond the challenges of processing power but which may lend a broader perspective to the technology selection process.
The document discusses the Genome in a Bottle Consortium's bioinformatics working group meeting in August 2013. It summarizes the Consortium's data release policy of making sequence data, alignments, variant calls, and analysis software publicly available. The group discussed previous analyses that have been performed on the reference material NA12878, upcoming analyses from various groups, considerations for integrating multiple datasets, organizing data on the NCBI FTP site, and reanalyzing reference materials as new sequence data is submitted.
The document describes the development of a software tool called the TCE Selector that seamlessly integrates radiology teaching databases with PACS clients according to the IHE TCE integration profile. The TCE Selector allows radiologists to select images from PACS, add metadata, and export teaching files to databases like MIRC. Over 700 teaching files were created in the first year with high user acceptance. Further development could focus on more educational content and interdisciplinary collaboration.
The Center for Applied Optimization at the University of Florida conducts interdisciplinary research in optimization involving faculty from various departments. Over the past 5 years, their research has included global optimization, optimization in biomedicine like predicting epileptic seizures, analyzing massive datasets like social networks, developing approximation algorithms, and algorithms for problems like multicast networks. Current projects also involve computational neuroscience, probabilistic classifiers in medicine, research on energy problems, and using Raman spectroscopy for cancer research. The Center collaborates with researchers from other institutions and hosts many visiting scholars each year.
Open science can contribute to AI trustworthiness. This talk categorizes scientific data platforms and frames AI trustworthiness, with pointers to open science contributions.
CDISC is a non-profit organization that establishes clinical research data standards to support data acquisition, exchange, and submission. It has developed several standards including CDASH, which aims to standardize data collection fields across clinical trials to streamline data analysis and reduce errors. CDASH defines a set of common safety domains and variables that can be collected consistently across studies in a standardized way. This helps analyze data more efficiently, reduces training time for sites, and decreases potential errors from inconsistent data collection.
The document describes ComRAD, an intelligent information and expert system for the healthcare domain. ComRAD provides medical specialists with advice for optimal treatment decisions. It consists of a Universal Knowledge Base (UKB), Operating Data Base (ODB), and CoSMoS managing and design software. The ODB contains patients' electronic health records while the UKB contains medical science information and resources. CoSMoS utilizes algorithms to generate forms, design treatment variants, and enable intrasystem communication between components. ComRAD aims to provide optimal, personalized treatment recommendations based on all relevant scientific and patient-specific factors.
A clinical data repository (CDR) consolidates clinical data from multiple sources to provide a unified view of individual patients. It integrates non-uniform data like lab results, demographics, prescriptions, and reports. Challenges to implementing a CDR include storage capacity, computing power, reliability, and connecting diverse data sources. Effective CDRs require standard data formats, verification of integrated data, and the ability to generate analytical reports from the consolidated data warehouse.
The Genome in a Bottle Consortium is developing well-characterized reference genomes and methods to assess confidence in whole genome variant calls. They have generated data from multiple sequencing technologies for several reference genomes, including NA12878. They are developing integrated variant call sets and evaluating structural variants. The consortium is also working with the Global Alliance for Genomics and Health on benchmarking tools and metrics to evaluate variant caller performance.
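The benchmarking the consortium works on ultimately reduces to comparing a caller's output against a truth set. A minimal sketch, assuming variants have already been normalized into comparable (chrom, pos, ref, alt) records; real benchmarking tools also reconcile different representations of the same variant, which this ignores.

```python
def benchmark_calls(truth, test):
    """Compare variant calls as sets of (chrom, pos, ref, alt) tuples
    and report precision, recall, and F1."""
    truth, test = set(truth), set(test)
    tp = len(truth & test)   # true positives: called and present in truth
    fp = len(test - truth)   # false positives: called but not in truth
    fn = len(truth - test)   # false negatives: truth calls that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}
```

With a well-characterized reference such as NA12878 as the truth set, these metrics let different callers be compared on equal footing.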
These are the slides presented by Darren Price in the Open Science Panel discussion at the BIOMAG 2018 meeting in Philadelphia. See also http://www.cam-can.org
This document discusses the challenges and opportunities biology faces with increasing data generation. It outlines four key points:
1) Research approaches for analyzing infinite genomic data streams, such as digital normalization which compresses data while retaining information.
2) The need for usable software and decentralized infrastructure to perform real-time, streaming data analysis.
3) The importance of open science and reproducibility given most researchers cannot replicate their own computational analyses.
4) The lack of data analysis training in biology and efforts at UC Davis to address this through workshops and community building.
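Digital normalization, mentioned in point 1 above, can be sketched in a few lines: a read is kept only while its median k-mer abundance, computed over the reads retained so far, stays below a cutoff, so redundant coverage is discarded while novel sequence is retained. This is a simplified illustration of the idea (as implemented in tools like khmer), not a faithful reimplementation; k and the cutoff here are toy values.

```python
from collections import defaultdict

def median_kmer_abundance(read, k, counts):
    """Median abundance of the read's k-mers in the running count table."""
    abundances = sorted(counts[read[i:i + k]]
                        for i in range(len(read) - k + 1))
    return abundances[len(abundances) // 2]

def digital_normalize(reads, k=4, cutoff=2):
    """Keep a read only if its median k-mer abundance (so far) is below
    the cutoff; otherwise it adds little new information and is dropped."""
    counts = defaultdict(int)
    kept = []
    for read in reads:
        if median_kmer_abundance(read, k, counts) < cutoff:
            kept.append(read)
            for i in range(len(read) - k + 1):
                counts[read[i:i + k]] += 1
    return kept
```

Because the count table only grows with *kept* reads, the memory and downstream compute scale with the distinct sequence content rather than the raw data volume, which is what makes the approach attractive for streaming genomic data.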
ImmPort strategies to enhance discoverability of clinical trial data - Barry Smith
Describes strategies for submitting clinical trial data to the NIAID Immunology Database and Analysis Portal (ImmPort) in order to advance discoverability, comparability, and analysis.
Rebecca Mareck graduated from Drexel University in 2016 with a Master of Science in Biomedical Engineering and a Bachelor of Science in Biomedical Engineering with a minor in Electrical Engineering. She maintained a high GPA of 3.5 and received various academic honors and scholarships. Her experience includes research positions at ALS Hope Foundation and Merck, developing assistive technologies, analyzing biomedical data, and troubleshooting manufacturing issues. She also interned at Herley Industries, designing test equipment and repairing systems. Rebecca is proficient in engineering software and lab skills, and is active in professional organizations.
The document discusses handling corporate data and information management. It covers objectives like data resource management, DBMS, data warehousing, and data mining. It describes the sources and types of information an organization uses, both formal sources like internal records and external reports, as well as informal sources like conversations. It also discusses database management systems, data models, data warehousing, and data mining - how organizations use these approaches to collect, process, analyze and extract useful information from their data.
In this presentation, Principal Statistical Scientist Ben Vaughn explains how clinical trial data moves from collection in the case report form to its presentation to FDA.
- Deep brain stimulation (DBS) of the subthalamic nucleus (STN) in Parkinson's disease patients affects both motor and non-motor functions through interactions with motor, associative, and limbic networks in the basal ganglia.
- Connectomics analysis using fiber tracking from DBS electrode locations can help explain individual variability in behavioral effects of STN-DBS across different cognitive tasks. Specifically, it shows that stimulation of fibers connecting regions like the pre-SMA can predict detriments in stopping behavior.
- Stimulation of prefrontal fibers bypassing the STN that connect to brainstem regions has been linked to worsening of depressive symptoms after surgery through connectomics analysis.
LABORATORY INFORMATION SYSTEM RADIOLOGY INFORMATION SYSTEMAj Raj
The document discusses laboratory information systems (LIS) and radiology information systems (RIS). LIS are used to manage data from laboratory instruments and tests, while RIS are used to manage radiology workflow and imaging data. Both systems aim to optimize operations through electronic data collection, analysis, and reporting. They integrate with other systems and aim to streamline processes like test ordering, results reporting, and image storage and retrieval. Picture archiving and communication systems (PACS) are also discussed as a key part of RIS for storing and sharing medical images across a healthcare system.
This presentation was provided by Dr. Paul Burton of the University of Bristol during the NISO Symposium, Privacy Implications of Research Data, held on September 11, 2016, in conjunction with the International Data Week in Denver, Colorado.
If Big Data is data that exceeds the processing capacity of conventional systems, thereby necessitating alternative processing measures, we are looking at an essentially technological challenge that IT managers are best equipped to address.
The DCC is currently working with 18 HEIs to support and develop their capabilities in the management of research data and, whilst the aforementioned challenge is not usually core to their expressed concerns, are there particular issues of curation inherent to Big Data that might force a different perspective?
We have some understanding of Big Data from our contacts in the Astronomy and High Energy Physics domains, and the scale and speed of development in Genomics data generation is well known, but the inability to provide sufficient processing capacity is not one of their more frequent complaints.
That’s not to say that Big Science and its Big Data are free of challenges in data curation; only that they are shared with their lesser cousins, where one might say that the real challenge is less one of size than diversity and complexity.
This brief presentation explores those aspects of data curation that go beyond the challenges of processing power but which may lend a broader perspective to the technology selection process.
The document discusses the Genome in a Bottle Consortium's bioinformatics working group meeting in August 2013. It summarizes the Consortium's data release policy of making sequence data, alignments, variant calls, and analysis software publicly available. The group discussed previous analyses that have been performed on the reference material NA12878, upcoming analyses from various groups, considerations for integrating multiple datasets, organizing data on the NCBI ftp site, and reanalyzing reference materials as new sequence data is submitted.
The document describes the development of a software tool called the TCE Selector that seamlessly integrates radiology teaching databases with PACS clients according to the IHE TCE integration profile. The TCE Selector allows radiologists to select images from PACS, add metadata, and export teaching files to databases like MIRC. Over 700 teaching files were created in the first year with high user acceptance. Further development could focus on more educational content and interdisciplinary collaboration.
The Center for Applied Optimization at the University of Florida conducts interdisciplinary research in optimization involving faculty from various departments. Over the past 5 years, their research has included global optimization, optimization in biomedicine like predicting epileptic seizures, analyzing massive datasets like social networks, developing approximation algorithms, and algorithms for problems like multicast networks. Current projects also involve computational neuroscience, probabilistic classifiers in medicine, research on energy problems, and using Raman spectroscopy for cancer research. The Center collaborates with researchers from other institutions and hosts many visiting scholars each year.
Open science can contribute to AI trustworthiness. This talk is a categorization of scientific data platforms, and a framing of AI trustworthiness with pointers to open science contributions.
CDISC is a non-profit organization that establishes clinical research data standards to support data acquisition, exchange, and submission. It has developed several standards including CDASH, which aims to standardize data collection fields across clinical trials to streamline data analysis and reduce errors. CDASH defines a set of common safety domains and variables that can be collected consistently across studies in a standardized way. This helps analyze data more efficiently, reduces training time for sites, and decreases potential errors from inconsistent data collection.
The document describes ComRAD, an intelligent information and expert system for the healthcare domain. ComRAD provides medical specialists with advice for optimal treatment decisions. It consists of a Universal Knowledge Base (UKB), Operating Data Base (ODB), and CoSMoS managing and design software. The ODB contains patients' electronic health records while the UKB contains medical science information and resources. CoSMoS utilizes algorithms to generate forms, design treatment variants, and enable intrasystem communication between components. ComRAD aims to provide optimal, personalized treatment recommendations based on all relevant scientific and patient-specific factors.
A clinical data repository (CDR) consolidates clinical data from multiple sources to provide a unified view of individual patients. It integrates non-uniform data like lab results, demographics, prescriptions, and reports. Challenges to implementing a CDR include storage capacity, computing power, reliability, and connecting diverse data sources. Effective CDRs require standard data formats, verification of integrated data, and the ability to generate analytical reports from the consolidated data warehouse.
The Genome in a Bottle Consortium is developing well-characterized reference genomes and methods to assess confidence in whole genome variant calls. They have generated data from multiple sequencing technologies for several reference genomes, including NA12878. They are developing integrated variant call sets and evaluating structural variants. The consortium is also working with the Global Alliance for Genomics and Health on benchmarking tools and metrics to evaluate variant caller performance.
These are the slides presented by Darren Price in the Open Science Panel discussion at the BIOMAG 2018 meeting in Philadelphia. See also http://www.cam-can.org
This document discusses the challenges and opportunities biology faces with increasing data generation. It outlines four key points:
1) Research approaches for analyzing infinite genomic data streams, such as digital normalization which compresses data while retaining information.
2) The need for usable software and decentralized infrastructure to perform real-time, streaming data analysis.
3) The importance of open science and reproducibility given most researchers cannot replicate their own computational analyses.
4) The lack of data analysis training in biology and efforts at UC Davis to address this through workshops and community building.
ImmPort strategies to enhance discoverability of clinical trial dataBarry Smith
Describes strategies for submission of clinical trial data to the NIAID Immunology Database and Analysis Portal in order to advance discoverability, comparability and analysis
Rebecca Mareck graduated from Drexel University in 2016 with a Master of Science in Biomedical Engineering and a Bachelor of Science in Biomedical Engineering with a minor in Electrical Engineering. She maintained a high GPA of 3.5 and received various academic honors and scholarships. Her experience includes research positions at the ALS Hope Foundation and Merck, developing assistive technologies, analyzing biomedical data, and troubleshooting manufacturing issues. She also interned at Herley Industries, designing test equipment and repairing systems. Rebecca is proficient with engineering software and laboratory skills, and is active in professional organizations.
The document discusses handling corporate data and information management. It covers objectives like data resource management, DBMS, data warehousing, and data mining. It describes the sources and types of information an organization uses, both formal sources like internal records and external reports, as well as informal sources like conversations. It also discusses database management systems, data models, data warehousing, and data mining - how organizations use these approaches to collect, process, analyze and extract useful information from their data.
In this presentation, Principal Statistical Scientist Ben Vaughn explains how clinical trial data moves from collection in the case report form to its presentation to FDA.
- Deep brain stimulation (DBS) of the subthalamic nucleus (STN) in Parkinson's disease patients affects both motor and non-motor functions through interactions with motor, associative, and limbic networks in the basal ganglia.
- Connectomics analysis using fiber tracking from DBS electrode locations can help explain individual variability in behavioral effects of STN-DBS across different cognitive tasks. Specifically, it shows that stimulation of fibers connecting regions like the pre-SMA can predict detriments in stopping behavior.
- Stimulation of prefrontal fibers bypassing the STN that connect to brainstem regions has been linked to worsening of depressive symptoms after surgery through connectomics analysis.
This document discusses linear deformations and basic volumetric imaging concepts relevant to Lead-DBS workflows. It describes how linear (affine) transformations can be used to register different images of the same subject by preserving properties like parallel lines and ratios between points. These transformations map between voxel and world coordinate systems using transformation matrices stored in image headers. Rigid and affine transformations involving translation, rotation, scaling, and shearing are presented for realigning one image to another. Hands-on examples for importing and co-registering images are also mentioned.
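The voxel-to-world mapping described here is a single matrix multiplication with the 4×4 affine stored in the image header, and the inverse matrix maps world coordinates back to voxel indices. A minimal NumPy sketch; the affine values are hypothetical (2 mm isotropic voxels with a shifted origin):

```python
import numpy as np

# Hypothetical 4x4 affine of the kind stored in a NIfTI header:
# 2 mm isotropic voxels, origin shifted to (-90, -126, -72) mm.
affine = np.array([
    [2.0, 0.0, 0.0,  -90.0],
    [0.0, 2.0, 0.0, -126.0],
    [0.0, 0.0, 2.0,  -72.0],
    [0.0, 0.0, 0.0,    1.0],
])

def voxel_to_world(affine, ijk):
    """Map a voxel index (i, j, k) to world (mm) coordinates."""
    i, j, k = ijk
    return (affine @ np.array([i, j, k, 1.0]))[:3]

def world_to_voxel(affine, xyz):
    """Inverse mapping: world (mm) coordinates back to voxel indices."""
    x, y, z = xyz
    return (np.linalg.inv(affine) @ np.array([x, y, z, 1.0]))[:3]

# With this affine, voxel (45, 63, 36) lands at the world origin.
print(voxel_to_world(affine, (45, 63, 36)))  # → [0. 0. 0.]
```

Co-registration then amounts to estimating one such matrix (rigid: translation and rotation only; affine: also scaling and shearing) that maps one image's world space onto another's.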
This document discusses connectomic deep brain stimulation and summarizes recent findings. It describes a tract commonly seen in STN-DBS and ALIC-DBS that traverses within the anterior limb of the internal capsule (ALIC). While this tract was previously referred to as the medial forebrain bundle (MFB), the document argues it is not the MFB but is similar to the "sl-MFB." Connectivity to medial and lateral prefrontal cortices and a potential hyperdirect pathway from the dorsal anterior cingulate cortex are discussed as being important. The anterior thalamic radiation (ATR) is also potentially linked.
This document summarizes several online tools for neuroimaging and scientific communication. It describes tools for visualizing large datasets like MicroDraw and Neuroglancer. It also outlines tools for collaboration such as Brain Box and Open Neuro Lab. Additionally, it discusses tools for publishing and organizing research like OSF, BioRxiv, Papers, and Figshare. Finally, the document presents tools for communication and project management, including ResearchGate, Github, Slack, and Trello.
Sexuality - Issues, Attitude and Behaviour - Applied Social Psychology - Psyc... — PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
SDSS1335+0728: The awakening of a ∼10⁶ M⊙ black hole — Sérgio Sacani
Context. The early-type galaxy SDSS J133519.91+072807.4 (hereafter SDSS1335+0728), which had exhibited no prior optical variations during the preceding two decades, began showing significant nuclear variability in the Zwicky Transient Facility (ZTF) alert stream from December 2019 (as ZTF19acnskyy). This variability behaviour, coupled with the host-galaxy properties, suggests that SDSS1335+0728 hosts a ∼10⁶ M⊙ black hole (BH) that is currently in the process of ‘turning on’. Aims. We present a multi-wavelength photometric analysis and spectroscopic follow-up performed with the aim of better understanding the origin of the nuclear variations detected in SDSS1335+0728. Methods. We used archival photometry (from WISE, 2MASS, SDSS, GALEX, eROSITA) and spectroscopic data (from SDSS and LAMOST) to study the state of SDSS1335+0728 prior to December 2019, and new observations from Swift, SOAR/Goodman, VLT/X-shooter, and Keck/LRIS taken after its turn-on to characterise its current state. We analysed the variability of SDSS1335+0728 in the X-ray/UV/optical/mid-infrared range, modelled its spectral energy distribution prior to and after December 2019, and studied the evolution of its UV/optical spectra. Results. From our multi-wavelength photometric analysis, we find that: (a) since 2021, the UV flux (from Swift/UVOT observations) is four times brighter than the flux reported by GALEX in 2004; (b) since June 2022, the mid-infrared flux has risen more than two times, and the W1−W2 WISE colour has become redder; and (c) since February 2024, the source has begun showing X-ray emission. From our spectroscopic follow-up, we see that (i) the narrow emission line ratios are now consistent with a more energetic ionising continuum; (ii) broad emission lines are not detected; and (iii) the [OIII] line increased its flux ∼3.6 years after the first ZTF alert, which implies a relatively compact narrow-line-emitting region.
Conclusions. We conclude that the variations observed in SDSS1335+0728 could be explained either by a ∼10⁶ M⊙ AGN that is just turning on or by an exotic tidal disruption event (TDE). If the former is true, SDSS1335+0728 is one of the strongest cases of an AGN observed in the process of activating. If the latter were found to be the case, it would correspond to the longest and faintest TDE ever observed (or another class of still unknown nuclear transient). Future observations of SDSS1335+0728 are crucial to further understand its behaviour. Key words: galaxies: active – accretion, accretion discs – galaxies: individual: SDSS J133519.91+072807.4
Compositions of iron-meteorite parent bodies constrain the structure of the pr... — Sérgio Sacani
Magmatic iron-meteorite parent bodies are the earliest planetesimals in the Solar System, and they preserve information about conditions and planet-forming processes in the solar nebula. In this study, we include comprehensive elemental compositions and fractional-crystallization modeling for iron meteorites from the cores of five differentiated asteroids from the inner Solar System. Together with previous results of metallic cores from the outer Solar System, we conclude that asteroidal cores from the outer Solar System have smaller sizes, elevated siderophile-element abundances, and simpler crystallization processes than those from the inner Solar System. These differences are related to the formation locations of the parent asteroids because the solar protoplanetary disk varied in redox conditions, elemental distributions, and dynamics at different heliocentric distances. Using highly siderophile-element data from iron meteorites, we reconstruct the distribution of calcium-aluminum-rich inclusions (CAIs) across the protoplanetary disk within the first million years of Solar-System history. CAIs, the first solids to condense in the Solar System, formed close to the Sun. They were, however, concentrated within the outer disk and depleted within the inner disk. Future models of the structure and evolution of the protoplanetary disk should account for this distribution pattern of CAIs.
BIRDS DIVERSITY OF SOOTEA BISWANATH ASSAM — goluk9330
Ahota Beel, nestled in Sootea, Biswanath, Assam, is celebrated for its extraordinary diversity of bird species. This wetland sanctuary supports a myriad of avian residents and migrants alike. Visitors can admire the elegant flights of migratory species such as the Northern Pintail and Eurasian Wigeon, alongside resident birds including the Asian Openbill and Pheasant-tailed Jacana. With its tranquil scenery and varied habitats, Ahota Beel offers a perfect haven for birdwatchers to appreciate and study the vibrant birdlife that thrives in this natural refuge.
Anti-Universe And Emergent Gravity and the Dark Universe — Sérgio Sacani
Recent theoretical progress indicates that spacetime and gravity emerge together from the entanglement structure of an underlying microscopic theory. These ideas are best understood in Anti-de Sitter space, where they rely on the area law for entanglement entropy. The extension to de Sitter space requires taking into account the entropy and temperature associated with the cosmological horizon. Using insights from string theory, black hole physics and quantum information theory we argue that the positive dark energy leads to a thermal volume law contribution to the entropy that overtakes the area law precisely at the cosmological horizon. Due to the competition between area and volume law entanglement the microscopic de Sitter states do not thermalise at sub-Hubble scales: they exhibit memory effects in the form of an entropy displacement caused by matter. The emergent laws of gravity contain an additional ‘dark’ gravitational force describing the ‘elastic’ response due to the entropy displacement. We derive an estimate of the strength of this extra force in terms of the baryonic mass, Newton’s constant and the Hubble acceleration scale a0 = cH0, and provide evidence for the fact that this additional ‘dark gravity force’ explains the observed phenomena in galaxies and clusters currently attributed to dark matter.
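As a quick order-of-magnitude check of the quoted acceleration scale a0 = cH0 (the Hubble constant value below is an assumed ~70 km/s/Mpc, not taken from the abstract):

```python
# Order-of-magnitude check of the acceleration scale a0 = c * H0.
c = 2.998e8                    # speed of light, m/s
H0 = 70 * 1000 / 3.086e22     # assumed H0 of ~70 km/s/Mpc, converted to 1/s
a0 = c * H0
print(f"a0 ≈ {a0:.1e} m/s^2")
```

This lands near 7 × 10⁻¹⁰ m/s², i.e. within an order of magnitude of the tiny accelerations at which the galaxy-rotation anomalies attributed to dark matter appear.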
TOPIC OF DISCUSSION: CENTRIFUGATION — shubhijain836
Centrifugation is a powerful technique used in laboratories to separate components of a heterogeneous mixture based on their density. This process utilizes centrifugal force to rapidly spin samples, causing denser particles to migrate outward more quickly than lighter ones. As a result, distinct layers form within the sample tube, allowing for easy isolation and purification of target substances.
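The separating force is usually quoted as relative centrifugal force (RCF, in multiples of g), computed from rotor radius and spin speed with the standard formula RCF = 1.118 × 10⁻⁵ × r × N² (r in cm, N in rpm). A small sketch with illustrative values:

```python
def rcf(radius_cm, rpm):
    """Relative centrifugal force (in multiples of g) for a rotor of
    radius `radius_cm` spinning at `rpm` revolutions per minute."""
    return 1.118e-5 * radius_cm * rpm ** 2

def rpm_for_rcf(radius_cm, target_rcf):
    """Invert the formula: spin speed (rpm) needed to reach a target RCF."""
    return (target_rcf / (1.118e-5 * radius_cm)) ** 0.5

# e.g. a 10 cm rotor at 3,000 rpm:
print(round(rcf(10, 3000)))  # → 1006
```

Because RCF grows with the square of the speed, doubling the rpm quadruples the force on the particles; protocols therefore specify RCF (×g) rather than rpm, which depends on the rotor.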
Discovery of An Apparent Red, High-Velocity Type Ia Supernova at 𝐳 = 2.9 wi... — Sérgio Sacani
We present the JWST discovery of SN 2023adsy, a transient object located in the host galaxy JADES-GS+53.13485−27.82088 with a host spectroscopic redshift of 2.903 ± 0.007. The transient was identified in deep James Webb Space Telescope (JWST)/NIRCam imaging from the JWST Advanced Deep Extragalactic Survey (JADES) program. Photometric and spectroscopic followup with NIRCam and NIRSpec, respectively, confirm the redshift and yield UV-NIR light-curve, NIR color, and spectroscopic information all consistent with a Type Ia classification. Despite its classification as a likely SN Ia, SN 2023adsy is both fairly red (E(B−V) ∼ 0.9) despite a host galaxy with low extinction and has a high Ca II velocity (19,000 ± 2,000 km/s) compared to the general population of SNe Ia. While these characteristics are consistent with some Ca-rich SNe Ia, particularly SN 2016hnk, SN 2023adsy is intrinsically brighter than the low-z Ca-rich population. Although such an object is too red for any low-z cosmological sample, we apply a fiducial standardization approach to SN 2023adsy and find that the SN 2023adsy luminosity distance measurement is in excellent agreement (≲ 1σ) with ΛCDM. Therefore, unlike low-z Ca-rich SNe Ia, SN 2023adsy is standardizable and gives no indication that SN Ia standardized luminosities change significantly with redshift. A larger sample of distant SNe Ia is required to determine if SN Ia population characteristics at high-z truly diverge from their low-z counterparts, and to confirm that standardized luminosities nevertheless remain constant with redshift.
3. Why normative connectomes?
• After DBS surgery, patients cannot be scanned in the MRI without restrictions, so acquiring high-quality data is difficult if not impossible
• Normative connectomes: “average wiring diagrams” of the human brain
• Based on diffusion-weighted or resting-state functional MRI
• Advantages: recorded on specialized MRI hardware, high signal-to-noise ratio, large samples
friederike.irmen@charite.de — Lead-DBS Workshop
Normative connectomic analyses in Lead-DBS
8. Connectivity from seeds or connectivity matrices
• Select the connectome (and resolution)
9. Connectivity map from seed
• Connectivity seeding from the VTAs of each patient
• Based on normative connectome data from healthy subjects or, e.g., PD patients
• Calculation may take a while
10. Example: calculate functional connectivity profile based on the PD connectome
11. Example: calculate functional connectivity profile based on the PD connectome
• Files generated:
13. Other options
• Select BER001–BER003
• Generate a connectivity matrix for the VTA contacts and the other voxels in the connectivity map
• Export time series of patients within all the connected voxels
18. fMRI-based connectivity from BER001, BER002, and BER003 to the depression network could be correlated, e.g., with depression score
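A correlation of that kind is, at its core, a per-patient Pearson correlation between connectivity values and clinical scores. A minimal sketch — the numbers below are made up for illustration, not patient data:

```python
import numpy as np

# Hypothetical per-patient values: mean fMRI connectivity from the
# stimulation volume to the depression network, and a depression score.
connectivity = np.array([0.12, 0.31, 0.45, 0.22, 0.38, 0.05])
depression_score = np.array([8, 14, 21, 11, 18, 6])

# Pearson correlation across patients
r = np.corrcoef(connectivity, depression_score)[0, 1]
print(f"r = {r:.2f}")
```

With real cohorts one would additionally report a p-value and correct for covariates; this only shows the shape of the analysis.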
Editor's Notes
In their work, Li et al. compare two patient cohorts undergoing DBS for treatment-resistant obsessive-compulsive disorder (OCD) from two study centers, Grenoble (n = 14) and Cologne (n = 22), with DBS in two different brain targets: the anteromedial subthalamic nucleus (amSTN) in Grenoble and the anterior limb of the internal capsule (ALIC) in Cologne.