A lightning talk for the first Big Data Chalk and Talk of the Fall 2015 term at Georgia Tech. This presentation tours, with hyperlinks, my favorite GitHub features for research scientists who are new to git.
5. pages
You get one site per GitHub account and organization,
and unlimited project sites. Ready? Let’s get started.
My Pages
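The Pages slide above can be sketched as a minimal local workflow. This is a hedged example, not from the deck: it assumes git is installed, uses a placeholder account name `username`, and leaves the actual publish step as a comment since it requires a matching repository on GitHub.

```shell
# Minimal GitHub Pages user-site sketch (placeholder account: "username").
# A user site lives in a repository named username.github.io and serves
# the static files on its default branch.
mkdir username.github.io && cd username.github.io
git init -q
git config user.name "Your Name"        # local identity so the commit succeeds
git config user.email "you@example.com"
echo '<h1>My research notes</h1>' > index.html  # any static HTML is served as-is
git add index.html
git commit -q -m "First page"
# To publish (assumes the repository already exists on github.com):
#   git remote add origin https://github.com/username/username.github.io.git
#   git push -u origin main
```

Once pushed, the site appears at `https://username.github.io` with no server to maintain, which is the "on-the-fly web hosting" the deck highlights.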
6. issues
• Issues raise project questions
• Categorize Questions
• Written in Plain-Text Markdown
• Vertical Integration with colleagues by linking email
• Social Media for Research Science Progress
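Since the slide notes that issues are written in plain-text Markdown, here is a hedged sketch of drafting one locally before filing it on GitHub; the file name, issue text, and `@colleague` handle are all invented for illustration.

```shell
# Draft an issue body as plain Markdown; GitHub renders headings, task
# lists, and @mentions (which notify colleagues by email).
cat > issue-draft.md <<'EOF'
## Reproduce Figure 3 from the draft

- [ ] locate the raw dataset
- [ ] re-run the analysis script
- [ ] attach the regenerated plot

cc @colleague
EOF
```

Task-list checkboxes (`- [ ]`) become clickable on GitHub, so an issue doubles as a lightweight research to-do list.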
7.
8. • GITHUB
– Extremely extensible (Heroku, Authorea, Plot.ly)
– Tech-based social network
– Easier to understand than git
– GitHub teaches with accessible documentation!
• PAGES
– Easy-to-use, on-the-fly web hosting
– Fine-grained results for research science
– Add Disqus for interactivity
– Interface with external services through APIs
• ISSUES
– Document research tasks, questions, and hypotheses
• PRIVACY
– GitHub defaults to transparency, but respects privacy.
?