This document provides an overview and introduction to the concepts and challenges of e-research. It begins by examining competing terms used to describe the transformation in research due to widespread digital technologies and networks. Key terms discussed include e-science, cyberinfrastructure, and e-research. The document then outlines the conceptual framework of the book, which is divided into sections on conceptualization, development, collaboration, visualization, data preservation and reuse, access and intellectual property, and case studies. Each chapter is briefly introduced. The concluding section notes areas for further research around chronicling transformations in scholarship and contextualizing changes within disciplinary cultures.
Jonathan Tedds Distinguished Lecture at DLab, UC Berkeley, 12 Sep 2013: "The ..." - Jonathan Tedds
This document discusses open access to research data and peer review of data publications. It notes that as a first step, data underpinning journal articles should be made concurrently available in accessible databases. The Royal Society report in 2012 advocated for all science literature and data to be online and interoperable. Key issues in linking data to the scientific record are data persistence, quality, attribution, and credit. The document provides examples from astronomy of data reuse leading to new publications and cites a study finding poor reproducibility of ecological data sets over time as data availability declines. It outlines different levels of research data from raw to processed to published and discusses initiatives for open data publication and peer review.
- The document discusses collaboration in science, defining collaboratories as organizational entities that span distance and support rich interaction around a common research area through shared tools and data.
- Classic examples of collaboratories described include those for upper atmospheric research, space physics, earthquake engineering simulation, and AIDS research.
- Research has found key factors for successful collaboration include alignment of goals, establishment of trust, appropriate division of labor, and effective technology and support infrastructure. Future collaborative scholarship may spread to more fields and be aided by advances in technology.
Moving From Small Science To Big Science - guest2426e1d
The document discusses two case studies of scientific research projects - one tracking marine mammals over 40 years, and the other studying genetic factors in bipolar disorder over 20 years. Both projects grew significantly in size and scope over time. This led to challenges in organizing and managing the large amounts of data collected in a way that was compatible, standardized, and accessible to collaborators. The researchers received training in conducting scientific tasks but not in systematically organizing information on a large scale. The document examines issues that arise when small projects expand and ways to help scientists address data management challenges as projects increase in scale and collaboration.
Scott Edmunds talk at G3 (Great GigaScience & Galaxy) workshop: Open Data: th... - GigaScience, BGI Hong Kong
Scott Edmunds talk at G3 (Great GigaScience & Galaxy) workshop: Open Data: the reproducibility crisis, and the need for transparency. Melbourne University 19th September 2014
The document summarizes the Chemist's Toolkit for publishing and promoting work online. It discusses open access publishing models, federal funding reporting mandates, retaining rights through author addenda, and copyright and Creative Commons licensing. The toolkit's contents change as publishing models evolve with new technologies, so it is important to maintain the toolkit by staying aware of developments. Globalization is increasing international collaborations, which affects cultural expectations around publishing.
This document discusses the need for open science due to a reproducibility crisis in many scientific disciplines. It notes that many published findings cannot be replicated and estimates that at least two-thirds of published results in psychology and biomedicine may be incorrect. This represents a credibility crisis that undermines public trust in science. The document argues that adopting practices of open science such as preregistration, open data, and detailed documentation can help address this crisis by reducing biases, enabling replication, and increasing transparency and reproducibility. Open science is presented as a means of improving research quality and accelerating discovery for the benefit of both science and society.
Scott Edmunds: Channeling the Deluge: Reproducibility & Data Dissemination in... - GigaScience, BGI Hong Kong
Scott Edmunds talk at the 7th International Conference on Genomics: "Channeling the Deluge: Reproducibility & Data Dissemination in the “Big-Data” Era". ICG7, Hong Kong, 1st December 2012
5 steps to using open access in the classroom, 11 9 2011 - Elizabeth Brown
The document discusses open educational resources and open content. It begins by outlining limitations to open content and then provides a five step process for creating open content: 1) identify open content, 2) assess the value of information, 3) create open content, 4) share open content with peers, and 5) preserve open content. It then discusses various tools and platforms for creating, sharing, and preserving open content. The document concludes by emphasizing that creating open content is an iterative process and provides additional advice.
This document summarizes a presentation about scientific publishing and open access. It discusses some of the challenges researchers and publishers currently face, such as long publication times and high journal costs. It proposes that a new model is needed that reduces workload, equitably shares resources, and incentivizes open sharing over "publish or perish". The presenter advocates building a technological infrastructure to support researchers and democratize access to academic content, so that scientific dissemination looks very different if developed today. The goal is to empower researchers and universities through serving the public interest.
The document discusses the concept of e-research as an intervention in existing research practices. It proposes using the term "e-research" rather than "e-science" to be more inclusive of different research modes. E-research is conceptualized as a specific, historically situated set of interventions. The Virtual Knowledge Studio is presented as an example of e-research, aiming to integrate design and analysis while maintaining a critical stance. Managing expectations is identified as important for shaping e-research and assessing its outcomes.
Slides describing Force11 work and the background of several of the speakers, used for talks at the University of Lethbridge, Carnegie Mellon, and internally at Elsevier
OII Summer Doctoral Programme 2010: Global brain by Meyer & Schroeder - Eric Meyer
The document discusses how technology is driving research to become more collaborative globally through distributed and networked tools. It examines several case studies where technologies enabled large-scale collaborative research projects that addressed questions too big for individual labs. These include distributed computing for particle physics, genomic studies, and proteomics. Challenges discussed include interoperability, data sharing policies, and sustaining momentum in infrastructure.
Sci 2011 big_data(30_may13)2nd revised _ loet - Han Woo PARK
This document summarizes a research paper that analyzes social and semantic networks related to big data research. It describes how the authors collected data on internationally co-authored papers from the 2011 SCI database using search terms related to big data. It then summarizes the two research questions addressed: 1) What is the structural pattern of international co-authorship networks in big data research? 2) What is the semantic structure of paper titles in this field? The authors analyzed the data using social network analysis and semantic network methods to address these questions and better understand patterns of collaboration and terminology use in emerging big data science.
1. Gigascience Journal is a new open access journal and database focused on publishing and hosting large-scale genomic and other "big data" sets to promote sharing, reproducibility, and reuse.
2. The journal aims to address incentives for data sharing by providing data producers credit through DOIs for datasets and enabling attribution and impact tracking when data is cited.
3. As an example, genomic data from the 2011 E. coli outbreak in Germany was rapidly shared on the journal's website under an open license and assigned a DOI to allow analysis and citation by researchers worldwide working to understand the epidemic.
2014 CrossRef Annual Meeting Keynote: Ways and Needs to Promote Rapid Data Sh... - Crossref
Keynote address: "Ways and Needs to Promote Rapid Data Sharing" by Laurie Goodman of GigaScience.
Data is the base upon which all scientific discoveries are built, and data availability speeds the rate at which discoveries are made. Given that the overall goal for research is to improve human health and our environment, waiting to release data until after the first publication (sometimes taking years) is unacceptable. There are myriad issues that impede researchers from openly, and most importantly, rapidly sharing data, including lack of incentives: no credit, limited funding benefits, and little impact on career advancement; and cultural issues: the fear of being scooped. However, scientific publishers —the communicators of science and a key mechanism by which a researcher’s productivity is measured— can, and should, play a central role in promoting data sharing. Data citation and publication are just some of the ways we can support and encourage researchers who share data. Here, I will provide examples to help make clear the need for publishers to play an active role in this process and provide potential ways to facilitate our ability to promote open and rapid data sharing. This is not easy; but it is essential.
This document summarizes a presentation by Nicole Nogoy from GigaScience about their journal, data platform, and database for large-scale data. GigaScience aims to enable more open access, collaboration and data sharing across disciplines by deconstructing research papers and providing credit for data, software and other digital outputs. It utilizes a big data infrastructure to integrate open access publishing with data and software publishing platforms. Examples are provided of data sets and analyses that have been published through GigaScience to maximize reuse and reproducibility.
Slides from Monday 30 July - Data in the Scholarly Communications Life Cycle Course which is part of the FORCE11 Scholarly Communications Institute.
Presenter - Natasha Simons
The document discusses ethics in research and provides several examples of ethical issues and misconduct:
- It defines ethics and research, and explains why adhering to ethical norms in research is important. It discusses common ethical principles and codes/policies for research ethics.
- It presents four cases involving ethical dilemmas in research and discusses appropriate responses. It also provides examples of deviations from acceptable research practices, such as fabricating or falsifying data.
- Specifically, it discusses high-profile cases of scientific misconduct, such as Dr. John Darsee, who fabricated research data, and Dr. Robert Slutsky, who listed non-contributing authors. It emphasizes that the most serious offenses in research are fabrication, falsification, and plagiarism.
PLOS Biology is launching a new section focused on meta-research to increase transparency in biosciences research. Meta-research examines issues related to research design, methods, reporting, evaluation and rewards. This will include exploring sources of bias, data sharing standards, and assessment metrics. Registered Reports will also be introduced, which accept studies for publication based on proposed methods rather than results, reducing bias against negative findings. However, most research data is lost within 10-15 years, highlighting the need for improved data sharing policies to maximize the value of research findings.
Presentation by RIN's Director, Michael Jubb, at the Association of Subscription Agents' annual conference in February 2010. http://www.subscription-agents.org/conferences/asa-conference-2010
The document discusses finding biological information through scientific literature and databases. It describes the scientific method and how it relates to information retrieval. It then covers various formats of scientific literature like journals, conference proceedings, magazines, books, and encyclopedias. The document also discusses major biological databases and how to perform effective searches using Boolean logic.
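The Boolean search logic mentioned above can be sketched in a few lines of Python; the mini-corpus and query terms here are hypothetical examples, not taken from the slides:

```python
# Minimal sketch of Boolean searching (AND / OR / NOT) over a hypothetical corpus.
docs = {
    1: "open access publishing in genomics",
    2: "data sharing and reproducibility in ecology",
    3: "open data licensing for genomics databases",
}

def matching(term):
    """Return the set of document IDs whose text contains the term
    (naive substring match; real systems tokenize and index)."""
    return {doc_id for doc_id, text in docs.items() if term in text}

# Query: (open AND genomics) NOT licensing -- Boolean operators map
# directly onto set intersection, union, and difference.
result = (matching("open") & matching("genomics")) - matching("licensing")
print(sorted(result))  # only doc 1 satisfies the query
```

Set algebra is exactly why Boolean queries compose so cleanly in bibliographic databases: each term yields a result set, and AND/OR/NOT combine them without re-scanning the corpus.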
Decomposing Social and Semantic Networks in Emerging “Big Data” Research - Han Woo PARK
A paper that gives a good overview of the background against which big data emerged as a field of study.
http://www.sciencedirect.com/science/article/pii/S1751157713000473
Park, H.W., & Leydesdorff, L. (2013). Decomposing Social and Semantic Networks in Emerging “Big Data” Research. Journal of Informetrics, 7(3), 756-765. DOI: 10.1016/j.joi.2013.05.004
The Evolution of e-Research: Machines, Methods and Music - David De Roure
The document summarizes the evolution of e-research over three generations from 1981 to the present. The first generation saw early adopters using tools within their disciplines with some reuse. The second generation was characterized by increased reuse of tools, data and methods across areas. The third generation is defined by radical sharing of resources globally across any discipline through social networks and reusable research objects. The document also discusses several specific projects and tools that exemplify each generation of e-research including myExperiment, Galaxy, and SALAMI.
E-learning: a versatile tool for knowledge management - Kishor Satpathy
E-learning is a versatile tool for knowledge management that allows people to learn anytime, anywhere using technology. It has evolved from computer-based training and web-based training to incorporate more collaborative learning approaches. E-learning supports knowledge management by creating a growing repository of knowledge and facilitating its sharing and reuse across organizations. When combined with metadata and semantic tagging, e-learning becomes an effective tool to organize knowledge and support collaboration.
This document presents an introduction to the book "Sociedad del Conocimiento y Educación" (Knowledge Society and Education). It argues that information, communication, education, and knowledge are essential to the progress of societies. Information and communication technologies (ICT) have a major impact on our lives by overcoming obstacles such as time and distance. The book explores how these forces are transforming education, and the need to adapt educational systems to the changes of the Knowledge Society.
Delivering online learning - are you ready? - Jisc Digifest 2016, by Jisc
This session will demonstrate the scaling up online learning diagnostic tool prototype and provide an overview of the new Jisc scaling up online learning guide to help users make the best of both resources.
The diagnostic tool takes users through key questions to help identify their personal readiness for creating, delivering or supporting online learning and provides links to useful resources and guides, based on a user’s results.
Showcasing research data tools - Jisc Digifest 2016, by Jisc
The document summarizes several projects from Phase 3 of the Research Data Spring initiative. It describes DataVault, a platform for long-term archival of research data. It also discusses DMA Online, a dashboard that aggregates research data management information from multiple sources. Additionally, it outlines Clipper, a tool for creating and sharing clips from audiovisual materials. Finally, it presents a project that aims to incentivize data deposit by enabling researchers to publish "data papers" describing their datasets.
26 Disruptive Technology Trends 2016–2018, by Brian Solis
Introducing the “26 Disruptive Technology Trends for 2016 – 2018.” In this report, we’ll explore some of the disruptive trends that are affecting pretty much everything over the next few years (at least, those that I’m following). It’s not just tech, though. The report is organized by socioeconomic and technological impact.
Obviously, this is not an exhaustive list of every technology and societal trend bringing about disruption on planet Earth. What follows, though, definitely affects the evolution of digital Darwinism: the evolution of society and technology and its impact on behavior, expectations, and customs.
Results Vary: The Pragmatics of Reproducibility and Research Object Frameworks, by Carole Goble
Keynote presentation at the iConference 2015, Newport Beach, Los Angeles, 26 March 2015.
Results Vary: The Pragmatics of Reproducibility and Research Object Frameworks
http://ischools.org/the-iconference/
BEWARE: presentation includes hidden slides AND in situ build animations - best viewed by downloading.
The Future of Research (Science and Technology), by Duncan Hull
This document summarizes the key trends in modern scientific research, including the rise of data-intensive science, collaborative and distributed research, and open science. It discusses how research is becoming more data-driven and dependent on large datasets. It also notes the growth of virtual and distributed collaboration between researchers. Finally, it outlines some of the implications for libraries and services to support reproducible, open, and data-driven scientific research.
ContentMining for France and Europe; Lessons from 2 years in the UK, by Peter Murray-Rust
This document summarizes Peter Murray-Rust's presentation on two years of content mining in the UK and lessons for France and Europe. Some key points discussed include:
- Content mining can save lives by enabling researchers to search literature and find past warnings, as in the case of Ebola.
- However, publishers like Elsevier and Wiley have stopped researchers' content mining efforts, hampering their work.
- France, Europe and the UK must actively support content mining through funding, tools, training and protecting researchers from restrictive publishers.
- Examples are given of ContentMine fellows' projects mining literature on topics like weevil-plant associations, cell migration and depression in animals.
ISMB/ECCB 2013 Keynote: Results may vary: what is reproducible? why do o..., by Carole Goble
Keynote given by Carole Goble on 23rd July 2013 at ISMB/ECCB 2013
http://www.iscb.org/ismbeccb2013
How could we evaluate research and researchers? Reproducibility underpins the scientific method: at least in principle, if not in practice. The willing exchange of results and the transparent conduct of research can only be expected up to a point in a competitive environment. Contributions to science are acknowledged, but not if the credit is for data curation or software. From a bioinformatics viewpoint, how far could our results be reproducible before the pain is just too high? Is open science a dangerous, utopian vision or a legitimate, feasible expectation? How do we move bioinformatics from a field where results are post-hoc "made reproducible" to one where they are pre-hoc "born reproducible"? And why, in our computational information age, do we communicate results through fragmented, fixed documents rather than cohesive, versioned releases? I will explore these questions drawing on 20 years of experience in both the development of technical infrastructure for Life Science and the social infrastructure in which Life Science operates.
Biodiversity Informatics: An Interdisciplinary Challenge, by Bryan Heidorn
"Impacto de la Informática en el Conocimiento de la Biodiversidad: Actualidad y Futuro” at Universidad Nacional de Colombia on August 12, 2011. https://sites.google.com/site/simposioinformaticaicn/home
Open Data in a Big Data World: easy to say, but hard to do?, by the LEARN Project
Presentation at 3rd LEARN workshop on Research Data Management, “Make research data management policies work”
Helsinki, 28 June 2016, by Sarah Callaghan, STFC Rutherford Appleton Laboratory
The document summarizes research on enabling data reuse from published datasets. It reviews 40 papers that cataloged 39 different features of datasets that can enable reuse. These features are grouped into categories related to enabling access, documenting methodological choices and quality, and helping users understand and situate the data. The paper presents a case study analyzing over 1.4 million data files from more than 65,000 repositories on GitHub, relating dataset engagement metrics to various reuse features. Using these metrics as proxies for reuse, an initial deep learning model is developed to predict a dataset's reusability based on its documented features. This work demonstrates the gap between existing principles for enabling reuse and actionable insights that can help data publishers and tools implement functionalities proven to enable reuse.
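As a toy illustration of the feature-based approach described above, the sketch below scores a dataset's reuse-readiness from whether it documents certain features. The feature names and weights here are purely hypothetical and are not the study's actual model:

```python
# Hypothetical reuse features, grouped loosely by the categories named
# above: enabling access, documenting methodology, and helping users
# understand the data. Weights are illustrative only.
REUSE_FEATURES = {
    "has_license": 3,             # enabling access
    "has_readme": 2,              # helping users understand the data
    "has_methodology": 2,         # documenting methodological choices
    "has_column_descriptions": 2, # helping users situate the data
    "has_persistent_id": 1,       # e.g. a DOI
}

def reuse_score(dataset: dict) -> float:
    """Return the fraction of weighted reuse features the dataset documents."""
    total = sum(REUSE_FEATURES.values())
    present = sum(w for f, w in REUSE_FEATURES.items() if dataset.get(f))
    return present / total

print(reuse_score({"has_license": True, "has_readme": True}))  # 0.5
```

A learned model would replace the hand-set weights with coefficients fitted against engagement metrics, as the study does with its deep learning model.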
The document discusses reproducible bioscience data. It describes Susanna-Assunta Sansone as a principal investigator and team leader at the University of Oxford e-Research Centre who gives a presentation on policies, communities, and standards around reproducible bioscience data. The presentation covers topics like preserving institutional memory, utilizing public data, and addressing reproducibility and reuse of public data through community standards and structured data annotation.
Science is a systematic process of building and organizing knowledge through testable explanations and predictions, while technology is the application of tools and techniques to solve problems or achieve goals. Together, science and technology underpin all aspects of modern life and have transformed how we live, work, travel, communicate, and access healthcare and information. We rely on the contributions of science and technology even when performing everyday tasks like turning on lights or getting a glass of water.
The document discusses two case studies of scientific research projects - one tracking marine mammals over 40 years, and the other studying genetic factors in bipolar disorder over 20 years. Both projects grew significantly in size and scope over time. This led to challenges in organizing and managing the large amounts of data collected in a way that was compatible, standardized, and accessible to collaborators. The researchers received training in conducting scientific tasks but not in systematically organizing information on a large scale. The document examines issues that arise when small projects expand and ways to help scientists address data management challenges as projects increase in scale and collaboration.
Social Machines of Science and Scholarship, by David De Roure
1. The document discusses how digital research is done today through social objects and social machines, which are computationally-enabled networks of expertise, data, models and narratives that support collaborative digital research.
2. Social machines allow casts of thousands to work together on research through participatory platforms and automated processes.
3. Examples of social machines include myExperiment, citizen science projects, and scholarly ecosystems that facilitate collaboration through digital artifacts like research objects, workflows and linked data.
There is an abundance of free online tools accessible to scientists and others that can be used for online networking, data sharing, and measuring research impact. Despite this, few scientists know how these tools can be used, or they fail to take advantage of them as an integrated pipeline to raise awareness of their research outputs. In this article, the authors describe their experiences with these tools and how to make best use of them to make scientific research more accessible, extending its reach beyond their own direct networks and communicating their ideas to new audiences. These efforts have the potential to drive science by sparking new collaborations and interdisciplinary research projects that may lead to future publications, funding, and commercial opportunities. The intent of this article is to: describe some of these freely accessible networking tools and affiliated products; demonstrate from our own experiences how they can be utilized effectively; and inspire their adoption by new users for the benefit of science.
The document discusses the evolution of science and research from the 1940s to present day. It notes Vannevar Bush's 1945 concerns about the growing mountain of research that scientists did not have time to fully understand or remember. It then discusses the current "data explosion" and challenges of accessing, sharing, and building on increasingly large amounts of data and research. The document advocates for reusable, reproducible, and transparent science through connected resources and environments that facilitate collaboration and knowledge sharing.
Keynote talk to LEARN (LERU/H2020 project) on research data management. Emphasizes that the problems are cultural, not technical. Promotes modern approaches such as Git and continuous integration, and announces DAT. Asserts that the Right to Read is the Right to Mine. Calls for widespread development of content mining (TDM).
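The "Git plus continuous integration" approach promoted in the talk above can be made concrete with a small sketch: a validation script that a CI job runs on every commit to a data repository, failing the build if a data file breaks the agreed format. The column names below are a hypothetical schema, not from the talk:

```python
# Minimal CI-style data check: validate a CSV against an expected
# schema. A CI pipeline would run this on every commit and fail the
# build if validate() returns any problems. Schema is illustrative.
import csv
import io

EXPECTED_COLUMNS = ["specimen_id", "species", "collected_on"]  # hypothetical

def validate(csv_text: str) -> list:
    """Return a list of problems found in the CSV; an empty list passes CI."""
    problems = []
    reader = csv.DictReader(io.StringIO(csv_text))
    if reader.fieldnames != EXPECTED_COLUMNS:
        problems.append("unexpected header: %r" % (reader.fieldnames,))
    for lineno, row in enumerate(reader, start=2):
        if not row.get("specimen_id"):
            problems.append("line %d: missing specimen_id" % lineno)
    return problems

sample = "specimen_id,species,collected_on\nW17,Curculio nucum,2014-06-02\n"
assert validate(sample) == []
```

The point of the cultural argument is precisely this: the tooling is ordinary software practice; what is missing is the habit of treating research data like versioned, tested code.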
Jean-Claude Bradley presents on "Peer Review and Science 2.0: blogs, wikis and social networking sites" as a guest lecturer for the “Peer Review Culture in Scholarly Publication and Grantmaking” course at Drexel University. The main thrust of the presentation is that peer review alone is not capable of coping with the increasing flood of scientific information being generated and shared. Arguments are made to show that providing sufficient proof for scientific findings does scale, and weakens the tragedy of the trusted-source cascade.
Open access to research has been shown to accelerate the research cycle and increase citations and usage of articles. In high-energy physics, researchers have openly shared preprints for decades through arXiv, allowing findings to be rapidly built upon. Analysis of arXiv usage shows the time between preprint posting and citation has significantly decreased as open access has increased. Studies also consistently find that open access articles receive more citations, with some seeing a 600% increase, than articles hidden behind paywalls. However, journal impact factors should not be used to evaluate individual researchers or papers, as they measure prestige rather than use or quality.
Emerging Scholarly Practice and Scholarly Primitives: a Case Study in Music a..., by David De Roure
The document summarizes David De Roure's talk on emerging scholarly practices involving digital scholarship, computation, and artificial intelligence (AI) techniques in music analysis and composition. It discusses how the digital musicology community has adopted new research methods using digital technologies and how music researchers are increasingly using AI. It provides examples of collaborations between humans and machines in music classification and composition.
Keynote talk on "Music in the Archives: Digital Musicology as a case study in Computational Archival Science" by David De Roure, for the workshop on "Computational Archival Science: digital records in the age of big data" at IEEE Big Data 2020, 11 December 2020.
Lightning talk opening the "Building a Digital Research Infrastructure" workshop at The National Archives, 10 January 2020. Based on Nov 2019 DCDC keynote "Digital Scholarship: Intersection, Automation, and Social Machines".
Alter: an ensemble work composed with and about AI, by David De Roure
Alter is an ensemble work composed collaboratively with and about artificial intelligence. It traces the development of an artificial mind in three phases, from an initial unclear conception to a complex and creative self, by having the AI dive into its own code between phases to retrain itself. The text is entirely written by an AI that learns from Ada Lovelace's correspondence and then wider 19th century writing and finally the internet, reflecting the data science behind its production. The work was commissioned by Barbican and performed by Britten Sinfonia in November 2019, paying tribute to Lovelace's scientific imagination.
Digital Scholarship: Intersection, Automation, and Scholarly Social Machines, by David De Roure
Keynote talk at DCDC 2019, Birmingham, November 2019. The theme of the conference was "Navigating the digital shift: practices and possibilities". The talk presents six short stories of my journeys in the evolving knowledge infrastructure. Thank you to all my fellow travellers and guides. (The slides all have a black strip of 2 or 3 lines at the top - this was for live captioning.)
Lovelace’s Legacy: Creative Algorithmic Interventions for Live Performance, by David De Roure
By David De Roure, Pip Willcox, Alan Chamberlain.
Paper presented at the workshop "The Design of Future Music Technologies: ‘Sounding Out’ AI, Immersive Experiences & Brain Controlled Interfaces" held in conjunction with Audio Mostly 2018 (AM'18), September 12–14, 2018, Wrexham, UK.
https://doi.org/10.1145/3243274.3275380
Experimental Humanities: An Adventure with Lovelace and Babbage, by David De Roure
"Experimental Humanities: An Adventure with Lovelace and Babbage" by David De Roure and Pip Willcox, University of Oxford. Paper presentation at 13th IEEE eScience Conference, Auckland, New Zealand, 25 October 2017.
Abstract: "The development and innovative application of digital research methods in humanities disciplines, characterised as Digital Humanities or e-Humanities, is an established feature of the e-Science and e-Research landscape. Typically these digital methods enable existing research questions to be tackled in new ways, at a scale and speed that transcend manual methods. In this paper we present a different approach to the application of digital techniques to humanities research, a branch of experimental humanities in which digital experiments bring insight and engagement with historical scenarios and in turn influence our understanding and our thinking today. We illustrate this through a series of experiments and demonstrations inspired by the work of Ada Lovelace and Charles Babbage, including simulation of the Analytical Engine, use of a web-based music application, construction of hardware, and reproduction of earlier mathematical results using contemporary computational methods."
Opening keynote talk at 11th eResearch Australasia Conference, Brisbane Convention and Exhibition Centre, 16 – 20 October 2017. Based in part on public lecture "The Imagination of Ada Lovelace" on Ada Lovelace day at ANU, slides co-authored with Pip Willcox.
This document provides an overview of an experimental humanities approach to exploring the imagination of Ada Lovelace. It includes biographical information on Lovelace and her collaborator Charles Babbage, as well as quotes and commentary relevant to understanding her work and vision. The document also describes efforts to recreate Lovelace's ideas digitally through projects like Numbers Into Notes, which maps mathematical sequences to music. The goal is to generate and test hypotheses about Lovelace's computational concepts using modern digital tools and design practices.
Despite many attempts to perturb a scholarly publishing system that is over 350 years old, it feels pretty much like business as usual. I argue that we have become trapped inside the machine, and if we want to change it in an informed way we need to step outside and take a look. First I describe my lens—what I mean by a social machine, and the scholarly social machines ecosystem.
I close with a list of questions that could be workshop discussion points. Presented at the ESWC 2017 Workshop on Enabling Decentralised Scholarly Communication, Portorož - Portorose, May 2017.
This article is a response to the Call for Linked Research. The essay is currently available on www.oerc.ox.ac.uk/sites/default/files/users/user384/scholarly-social-machines.html
This document discusses social machines and how to study them. It begins with definitions of social machines and discusses empowered citizens and studying social machines. It presents scholarly social machines and social platforms. It discusses the internet of things and concludes with "Sociam GO!" emphasizing the study of social machines.
Keynote talk for NCRM Stream Analytics workshop, 19 January 2017, Manchester.
My talk is called "New and Emerging Forms of Data: Past, Present, and Future” and I will be giving a perspective from my role as one of the ESRC Strategic Advisers for Data Resources, in which I was responsible for new and emerging forms of data and realtime analytics. The talk also includes some of the current work in the Oxford e-Research Centre on Social Machines (the SOCIAM project) and an introduction to the PETRAS Internet of Things project.
The talk raises a number of important issues looking ahead, including massive scale of data that is already being supplied by Internet of Things, the implications of automation in our research, reproducibility and confidence in research results. I will also ask, how can the new forms of data and new research methods enable social scientists to work in new ways, and can we move on from the dependence on the traditional investment in longitudinal studies?
Plans and Performances: Parallels in the Production of Science and Music, by David De Roure, Graham Klyne, Kevin R. Page, John Pybus, David M. Weigl, Matthew Wilcoxson, and Pip Willcox. Presented at IEEE e-Science 2016, Baltimore, 25 October 2016
"On the Description of Process in Digital Scholarship" Paper at the 1st Workshop on Humanities in the SEmantic web (WHiSE 2016) colocated with ESWC 2016, Heraklion, Crete, Sunday 29 May 2016
Panel position for "10 Years of Web Science" panel at ACM Web Science 2016, Hannover, Germany, Monday 23 May 2016, with panellists:
Steffen Staab, Universität Koblenz-Landau & University of Southampton (chair)
David De Roure, Oxford e-Research Centre, University of Oxford
Susan Halford, University of Southampton
Anni Rowland-Campbell, Intersticia, Web Science Trust & Web Science Institute
Jim Hendler, Rensselaer Polytechnic Institute
"'Tis true. There's magic in the Web: The Short and the Long of Co-Creation, Web Science, and Data Driven Innovation". Keynote for the DATA-DRIVEN INNOVATION WORKSHOP 2016 collocated with ACM Web Science 2016, Hannover, Germany, Sunday 22 May 2016
This document discusses the ethics of increasing automation and the evolving knowledge infrastructure. It notes challenges around trusting automated analysis with no human in the loop, and understanding complex and evolving data sources. Breakout groups are proposed to discuss interventions for improving research quality, issues around reproducibility with different data types, and the impact of large online data on reproducibility in social sciences. The document references challenges around safety vs security, and tradeoffs between hardening systems and adaptive response.
Opening talk at the "Interdisciplinary Data Resources to Address the Challenges of Urban Living” Workshop at the Urban Big Data Centre, University of Glasgow, 4 April 2016
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
8. 1st Generation Summary: Current practices of early adopters of tools. Characterised by researchers using tools within their particular problem area, with some re-use of tools, data and methods within the discipline. Traditional publishing is supplemented by publication of some digital artefacts such as workflows and links to data. Science is accelerated, and practice is beginning to shift towards in silico work.
17. Paul's Research Object: a diagram of "Paul's Pack", showing how workflows (Workflow 16, Workflow 13, Common pathways, QTL) feed into one another and produce results, logs and metadata, which are in turn included in and published in the paper and slides.
19. 2nd Generation Summary: Projects delivering now, with some institutional embedding. The key characteristic is re-use – of the increasing pool of tools, data and methods across areas and disciplines. These contain some freestanding, recombinant, reproducible research objects. New scientific practices are established and opportunities arise for completely new scientific investigations. Some expert curation.
23. Digital Music Collections: crowdsourced ground truth, community software, a Linked Data repository and a supercomputer, supporting structural analysis of large amounts of music information.
25. 3rd Generation Summary: The solutions we'll be delivering in 5 years. Characterised by global re-use of tools, data and methods across any discipline, surfacing the right level of complexity for the researcher, in routine use. The key characteristic is radical sharing. Research is significantly data-driven – plundering the backlog of data, results and methods. Increasing automation and decision support for the researcher – the VRE becomes assistive. Curation is autonomic and social.
Editor's Notes
Today I’m going to talk about the trajectory of e-Science – from its conception through examples of 3 generations, and I’ll reflect on how we are moving from generation 2 to generation 3. Different disciplines and especially communities may be in different stages of evolution.
First something about words. This definition of e-Science is important – it reminds us that it isn’t just about technology but about people working together and being empowered by technology – and the emphasis on “science” reminds us that ultimately success is measured by new scientific outcome. At the turn of the decade this was a vision of the future. A programme was created called e-Science. The projects doing the innovation were labelled as “e-Science”. By the time we arrive, it’s just “science”. So “e-Science” has become the name of the journey rather than the destination. Note that the innovation that takes us to the destination isn’t solely in the custody of e-Science projects – there’s a lot of relevant work going on that doesn’t carry that label. Note also that when we say “e-Science” we actually mean “e-Research”! We sometimes forget to say that.
e-Science is often characterised as dealing with the data deluge – especially from new experimental techniques such as combinatorial chemistry, DNA microarrays, instruments, sensor networks, earth observation – even Facebook (which I see as a kind of large hadron collider – or large people collider – of social science), as well as digitisation programmes and release of existing data (e.g. open government data) or new modes of access to secure data. Researchers are working digitally. The data deluge is caused by, and needs to be handled by, automation. The trick with automation is getting the right balance of "human in the loop", so that researchers can do what they're very good at while machines do what machines are very good at. BTW note the cocktail on the form in this slide!
Scientific workflow systems are a key automation technique for systematically handling the data deluge and giving us the “workflow” as a new sharable artefact of digital science – to record, repeat, reproduce and repurpose an experiment. This is an iconic slide by Carole Goble which is much repeated, reproduced and repurposed!
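The idea of a workflow as a sharable artefact – something you can record, repeat, reproduce and repurpose – can be sketched in a few lines. This is a minimal illustration in plain Python, not the model of any particular workflow system such as Taverna or Kepler; the class and method names are my own.

```python
# A workflow as an ordered list of named steps that can be recorded,
# re-run on the same inputs (repeat/reproduce), or applied to new
# inputs (repurpose). Illustrative sketch only.

class Workflow:
    def __init__(self, name):
        self.name = name
        self.steps = []          # (label, function) pairs, run in order
        self.provenance = []     # record of what each step produced

    def add_step(self, label, fn):
        self.steps.append((label, fn))
        return self              # allow chaining

    def run(self, data):
        for label, fn in self.steps:
            data = fn(data)
            self.provenance.append((label, data))  # record for later inspection
        return data

# Repurposing: the same recorded steps applied to a new input.
wf = Workflow("normalise-and-tokenise")
wf.add_step("lowercase", str.lower).add_step("tokenise", str.split)
print(wf.run("Hello Hello World"))   # ['hello', 'hello', 'world']
```

Because each run appends to a provenance log, the same object captures both the method and a record of its execution – the two things a shared workflow artefact needs.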
As keen observers of the e-Research ecosystem (as I'm sure we all are!) it's interesting to note just how many workflow systems there are. This isn't bad – each one comes prepackaged to solve particular problems for particular research communities. It's about adoption, about doing the specific before the generic. It shows co-evolution in action – successful e-Research isn't about technology impacting research, it's about technology being harnessed by researchers. Note: computer scientists in the audience may feel an urge to build a generic workflow language so that these systems can inter-operate. As it happens, workflows by their very nature plug together pretty well anyway – calling each other as services, or piping data from one to another.
Some co-evolution in action. In CombeChem I didn’t get requirements and go away and design a system that nobody wanted. We empowered some chemists to harness the technology – in this case Semantic Web. We “went on the journey” with them. They have done cool stuff! Semantic lab books, publication at source (e-crystals then blogging the lab), semantically enhanced publications. And a neat units ontology.
This is a summary of the phase we have been describing. The text on my summary slides has evolved but was originally based on the work of the e-Laboratories group at Manchester University (cf collaboratory or Virtual Research Environment) – I believe this framework to be more generally applicable, as you’ll see in this talk.
What we didn’t see much in phase 1 was sharing and reuse, but this is essential to harnessing of the new technology. The story on this slide involves sharing in a corridor and we will go on to see how we do it digitally! But it’s an important motivation. It led to new science.
The problem with sharing is that scientists are selfish – not so much e-Science as “me-Science”!
Heard this one? :-)
So we created “myExperiment” to find out whether scientists do indeed share enough to enjoy the benefits. New Scientist called it “mySpace for Scientists” (and my daughter called it mySpace for Science homework) but alas mySpace was soon passé, so it rapidly became Facebook for scientists. But that was a deterrent to uptake, because it was perceived to imply no privacy. So it’s not facebook! Incidentally our astronomy colleagues picked up the idea to create “Spacebook” :-)
How we actually describe myExperiment of course depends on our audience, and there are things of interest to many people. It’s like the blind monks and the elephant. Apologies to repositories colleagues in the audience for putting them at the tail end!
myExperiment in one slide! It's a "boutique" Web site with the largest public collection of scientific workflows. For lots more information see the myExperiment wiki http://wiki.myexperiment.org/ BioCatalogue is a registry of Web Services in the life sciences and is directly based on the myExperiment experience. Sysmo and Methodbox grew from the myExperiment codebase – Methodbox is an e-Social Science e-Laboratory for sharing and analysing data, and Sysmo is customised to the systems biology domain. See http://www.biocatalogue.org/ http://www.methodbox.org/ http://www.sysmo-db.org/
My example screenshot page today isn’t a Taverna workflow but is another example of co-evolution. This is a nimrod workflow, and it’s on the Australian instance of myExperiment. We don’t mandate how people use myExperiment, we empower and watch and learn! One of the distinctives is the yellow strip – the “social metadata”... Licenses, credits, attribution. Without this scientists wouldn’t use it.
Lots of people focus on data (after all, there is a deluge!). Another important distinctive of myExperiment is that we have focused on sharing workflows (specific first – we focus on workflows like movies on youtube or photos on flickr) – or more generally on methods (sharing “know-how” ). If there is a data deluge then surely methods for handling and analysing it are just as important as the data?
This is reflected in a third distinctive – the pack. This is Paul Fisher's pack from the Tryps example. Some packs contain example input and output data so workflows can be checked for "decay" (they don't actually rot, but the world changes round them). While others are looking at semantically enhanced publication, we are asking "what is the shared artefact of future research?" We come at the same problem from the other side. We have it surrounded! Our approach relieves us of the paper mindset – so, for example, a Research Object could contain information for many audiences and purposes, with a commonly interpreted core (social scientists will recognise the idea of a "boundary object").
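A pack of this kind can be pictured as a plain data structure. The sketch below is hypothetical – the field names and the `check_decay` helper are my own, not the myExperiment or Research Object schema – but it shows the key idea: bundling workflows with example inputs and expected outputs makes decay detectable by re-execution.

```python
# Hedged sketch of a "pack"/Research Object as a data structure.
# All names here are illustrative, not a real schema.
from dataclasses import dataclass, field

@dataclass
class ResearchObject:
    title: str
    workflows: list = field(default_factory=list)    # workflow identifiers
    example_inputs: dict = field(default_factory=dict)
    expected_outputs: dict = field(default_factory=dict)
    annotations: dict = field(default_factory=dict)  # licence, credits, attribution

    def check_decay(self, run):
        # Re-run each workflow on its example input and compare with the
        # recorded output: an unchanged result suggests the workflow has
        # not "decayed" as the services around it changed.
        return {wf: run(wf, self.example_inputs[wf]) == self.expected_outputs[wf]
                for wf in self.workflows}

ro = ResearchObject("Tryps example")
ro.workflows = ["wf16"]
ro.example_inputs = {"wf16": "acgt"}
ro.expected_outputs = {"wf16": "ACGT"}
print(ro.check_decay(lambda wf, x: x.upper()))  # {'wf16': True}
```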
None of this would be relevant if we weren’t seeing new science coming out – and we are. This example involves a microscope – back to our earlier instruments and automation theme – and a Kepler workflow which is shared on myexperiment.org.au and is in routine use.
This is pretty much where we are now!
Now we look at myExperiment as a probe into the future behaviour of researchers. For example, these workflows by Francois Belleau show what could be described as another level of working – building on the new tooling.
Here we see bioinformaticians assembling the resources they need to answer a research question – and also demonstrating what the methods section of the future paper needs to look like. They are using Linked Data. We see the power – ease of assembly. This could be where the new computer science challenges lie in e-Research.
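The "ease of assembly" that Linked Data gives those bioinformaticians comes down to pattern-matching over triples. A toy in-memory version makes the idea concrete; real work would use RDF and SPARQL tooling, and the identifiers below are hypothetical.

```python
# Illustrative only: a tiny in-memory triple store showing the Linked Data
# assembly idea. Identifiers are made up for the example.

triples = {
    ("gene:exampleGene", "linksTo", "pathway:glycolysis"),
    ("pathway:glycolysis", "studiedIn", "paper:examplePaper"),
}

def query(s=None, p=None, o=None):
    """Return triples matching the (subject, predicate, object) pattern;
    None acts as a wildcard, as in a SPARQL variable."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Assembling resources to answer a question: what does this gene link to?
print(query(s="gene:exampleGene", p="linksTo"))
```

The point is that once resources expose themselves as triples with shared identifiers, joining them needs nothing more than this pattern match – which is why assembly feels easy.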
To show it isn’t just bioinformaticians, here are Computational Musicologists doing a similar thing. Here the “signal” is digital music recordings, and the research question relates to country music!
That example comes from a Digging into Data project with the best project acronym ever. The project is conducting a massive structural analysis of music in the Internet Archive, to support musicologists. It illustrates many of the things we are now seeing in e-Research – crowdsourcing, annotation, community software development, high performance computation, data publication. This project involves UIUC, McGill and Oxford – and the supercomputer time is donated by NCSA.
We’ve seen digital humanities, let’s look briefly at e-social science – or rather, “Digital Social Research” (the name of the destination not the journey!) In social science we have more data than ever before but not collected for social science research per se – it’s fit for a different purpose. This brings a set of challenges, from statistics to ethics. We also have more capability than ever before, as illustrated in this talk. We believe the trick (again) is to focus on “methods” – the training and capacity building in the next generation of researchers. Social Science has another important angle – the social science study of e-Research itself. Many useful studies are now emerging.
Once the technologies are established and adopted we can realise the benefits of sharing – not just in “big science” but in everyday research. Collections like myExperiment enable new forms of analysis – of patterns of methods for example.
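One concrete form of "analysis of patterns of methods" over a collection like myExperiment is counting which step combinations recur across shared workflows. The sketch below uses made-up step sequences, not real myExperiment data.

```python
# Hedged sketch: mining a workflow collection for recurring method
# patterns, here pairs of consecutive steps. Data is illustrative.
from collections import Counter

workflows = [
    ["fetch", "blast", "filter", "plot"],
    ["fetch", "blast", "annotate"],
    ["fetch", "filter", "plot"],
]

# Count every consecutive (step, next_step) pair across all workflows.
pairs = Counter(
    (a, b) for steps in workflows for a, b in zip(steps, steps[1:])
)
print(pairs.most_common(2))  # ('fetch','blast') and ('filter','plot') each occur twice
```

Scaled up over thousands of public workflows, this kind of count is exactly the "pattern of methods" analysis that becomes possible once sharing is routine.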
What we have seen throughout this talk is co-evolution or co-design in action. Or – more words – co-constitution. For computer scientists let's just say co-* :-) A year ago I did a tour of the US with Malcolm Atkinson and we introduced two metaphors which have become "memes":
- Intellectual access ramps, of which workflow systems and myExperiment are examples, enable incremental engagement – rather than jumping straight into the fast lane! They are for scientists but also developers and research technologists.
- Datascopes. These are the assemblies of tools that take us from signal to understanding. They are scientific instruments which equally support humanists. We hope they will change our understanding of our place in the universe.