Decentralized Clinical Trials, presentation by Craig Lipset for mHealth Israel (Levi Shapiro)
Decentralized Clinical Trials, presentation by Craig Lipset for mHealth Israel, April 20, 2021. Origin Story: Centralization Enables Decentralization, and the analogous potential for centralization leading to decentralization in clinical trials. Decentralization: purpose and potential benefits, including resilience and business continuity. Pre-Pandemic DCT Timeline: a 17-year history prior to COVID-19. Seasons of Decentralization in 2020: Spring of Continuity, Summer of Restarts, Fall of Commitment, Winter of Pathways to Scale. 79% of sponsors/CROs are increasing DCT use; 90% of participants are experiencing change; 75% focus on going hybrid; 73% of sites will continue to use telemedicine beyond the pandemic; 76% have accelerated their DCT strategies. Leading Implementation Strategy: pairing the DCT toolkit to study needs. Identify the decentralized research methods and tools needed by the medicine portfolio; ensure aligned SOPs and training; identify new partners; modify protocols/templates; pair the "right" method/tool to each study based upon diverse criteria. Barriers to scaled adoption of decentralized trials: regulatory ambiguity, global variability, technology interoperability and data flow, investigator and patient readiness, endpoint limitations, organizational culture. Forecasts and Futures: choice and flexibility for participants on a visit-by-visit basis; research sites empowered to use their existing technology; new opportunities to engage treating physicians enable research as a care option; observational "all-comer" studies and platform trials with DCT bring research to people.
I explain plainly what salami slicing is: the practice of fragmenting a single piece of research into as many publications as possible. Salami publishing and its hazards.
Database Designing in Clinical Data Management (ClinosolIndia)
When designing a Clinical Data Management (CDM) database, several key considerations should be taken into account to ensure efficient data capture, storage, and retrieval. Here are some important aspects to consider in CDM database design:
Define Study Requirements:
Understand the specific requirements of the study and the data to be collected. This includes variables, data types, formats, and any specific rules or calculations required for data validation and derivation. Consult with the study team and stakeholders to determine the necessary data elements.
Data Model Design:
Develop a data model that represents the structure and relationships of the data. Use standard data models, such as CDISC (Clinical Data Interchange Standards Consortium) standards, as a foundation. Define entities (e.g., patients, visits, assessments) and attributes (e.g., demographics, lab results) and establish relationships between them.
Data Dictionary:
Create a comprehensive data dictionary that provides a detailed description of each data element, including its name, definition, data type, length, format, allowable values, and any validation or derivation rules. The data dictionary serves as a reference for data entry and validation checks.
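As an illustrative sketch only (the element names and fields below are hypothetical, not drawn from any particular standard; real projects often follow CDISC define.xml conventions), a data dictionary entry for a single data element might be represented as a structured record:

```python
# A minimal, illustrative data-dictionary: one record per data element,
# capturing name, definition, type, length, units, allowable values, and
# any derivation rule. All names here are hypothetical examples.
data_dictionary = {
    "SYSBP": {
        "label": "Systolic Blood Pressure",
        "data_type": "integer",
        "length": 3,
        "units": "mmHg",
        "allowable_range": (60, 250),   # validation rule: plausible range
        "derivation": None,             # captured directly, not derived
    },
    "BMI": {
        "label": "Body Mass Index",
        "data_type": "float",
        "length": 5,
        "units": "kg/m^2",
        "allowable_range": (10.0, 70.0),
        "derivation": "WEIGHT / (HEIGHT_M ** 2)",  # derived from other elements
    },
}

def describe(element):
    """Return a one-line description of a data element for review."""
    e = data_dictionary[element]
    return f"{element}: {e['label']} ({e['data_type']}, {e['units']})"
```

Keeping the dictionary in a machine-readable form like this lets the same definitions drive both data-entry forms and validation checks.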
Database Schema:
Design the database schema based on the data model and data dictionary. Identify the tables, fields, and relationships needed to store the data. Determine primary and foreign keys to establish relationships between tables. Normalize the schema to reduce redundancy and improve data integrity.
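A minimal sketch of such a normalized schema, using SQLite for illustration (the table and column names are invented for this example): patients, visits, and lab results live in separate tables, with primary and foreign keys establishing the relationships.

```python
import sqlite3

# Hypothetical normalized schema: patients -> visits -> lab_results.
# Foreign keys enforce the relationships; normalization avoids repeating
# patient demographics on every result row.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE patients (
    patient_id   INTEGER PRIMARY KEY,
    subject_code TEXT NOT NULL UNIQUE,        -- de-identified subject identifier
    sex          TEXT CHECK (sex IN ('M', 'F')),
    birth_year   INTEGER
);
CREATE TABLE visits (
    visit_id   INTEGER PRIMARY KEY,
    patient_id INTEGER NOT NULL REFERENCES patients(patient_id),
    visit_name TEXT NOT NULL,                 -- e.g. 'Screening', 'Week 4'
    visit_date TEXT NOT NULL
);
CREATE TABLE lab_results (
    result_id INTEGER PRIMARY KEY,
    visit_id  INTEGER NOT NULL REFERENCES visits(visit_id),
    test_code TEXT NOT NULL,
    value     REAL,
    units     TEXT
);
""")

# Populate one patient, one visit, one result, then read it back via joins.
cur = conn.execute(
    "INSERT INTO patients (subject_code, sex, birth_year) VALUES ('101-0001', 'F', 1980)")
pid = cur.lastrowid
cur = conn.execute(
    "INSERT INTO visits (patient_id, visit_name, visit_date) VALUES (?, 'Screening', '2021-04-20')",
    (pid,))
vid = cur.lastrowid
conn.execute(
    "INSERT INTO lab_results (visit_id, test_code, value, units) VALUES (?, 'SYSBP', 128, 'mmHg')",
    (vid,))
row = conn.execute("""
    SELECT p.subject_code, v.visit_name, r.test_code, r.value
    FROM lab_results r
    JOIN visits v ON v.visit_id = r.visit_id
    JOIN patients p ON p.patient_id = v.patient_id
""").fetchone()
```

The join at the end shows why the keys matter: any result can be traced back to its visit and patient without duplicating that information in the results table.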
Data Capture Forms:
Design user-friendly data capture forms to facilitate efficient and accurate data entry. Align the form layout with the data model and data dictionary. Include necessary data validation checks and provide clear instructions or prompts for data entry.
Data Validation and Quality Checks:
Incorporate data validation checks to ensure data accuracy and completeness. Implement range checks, format checks, consistency checks, and logic checks to identify and prevent data entry errors. Include data quality control processes to identify and resolve data discrepancies or anomalies.
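The four kinds of checks named above can be sketched as small functions; the rules below (blood-pressure range, subject-code format, consent-date consistency, pregnancy/sex logic) are hypothetical examples, not rules from any specific trial.

```python
import re
from datetime import date

def range_check(value, low, high):
    """Range check: value must fall within [low, high]."""
    return low <= value <= high

def format_check(subject_code):
    """Format check: e.g. 3-digit site number, dash, 4-digit subject number."""
    return re.fullmatch(r"\d{3}-\d{4}", subject_code) is not None

def consistency_check(visit_date, consent_date):
    """Consistency check: no visit may precede informed consent."""
    return visit_date >= consent_date

def logic_check(record):
    """Logic check: a 'pregnant' flag is only valid for female subjects."""
    if record.get("pregnant") == "Y":
        return record.get("sex") == "F"
    return True
```

In practice such checks run at data entry (rejecting or flagging values immediately) and again in batch during cleaning.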
Security and Access Controls:
Implement appropriate security measures to protect the confidentiality, integrity, and availability of the data. Define user roles and access levels to control data access and modification. Employ encryption, authentication, and audit trails to ensure data security and compliance with regulatory requirements.
Data Extraction and Reporting:
Consider the need for data extraction and reporting capabilities. Design mechanisms to extract data from the database for analysis or reporting purposes. Implement data export functionalities in commonly used formats, such as CSV or Excel, or integrate with reporting tools or systems.
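A CSV export of cleaned records might look like the following sketch (the record fields are hypothetical), using the standard library so the output opens directly in Excel or any analysis tool:

```python
import csv
import io

# Illustrative cleaned records to be exported; field names are invented.
records = [
    {"subject_code": "101-0001", "visit": "Screening", "sysbp": 128},
    {"subject_code": "101-0002", "visit": "Screening", "sysbp": 117},
]

def export_csv(rows, fileobj):
    """Write records to a CSV file with a header row taken from the first record."""
    writer = csv.DictWriter(fileobj, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)

# Write to an in-memory buffer here; in practice this would be a file
# handed off to biostatistics or loaded into a reporting system.
buf = io.StringIO()
export_csv(records, buf)
```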
Clinical Data Management (CDM) is a critical component of clinical research that involves the collection, cleaning, validation, and management of clinical trial data to ensure its accuracy, integrity, and compliance with regulatory requirements. The workflow of CDM typically consists of several key stages, each with specific activities and processes. Here is an overview of the typical workflow of CDM:
Study Startup:
Protocol Review: CDM teams begin by reviewing the clinical trial protocol to understand the study's objectives, endpoints, data collection requirements, and timelines.
Database Design: Based on the protocol, the team designs a data capture system or electronic data capture (EDC) system. This includes creating data entry forms, defining data validation checks, and setting up data dictionaries.
Data Collection:
Case Report Form (CRF) Design: CDM professionals design electronic or paper CRFs to collect data during the trial. CRFs capture specific data points required by the protocol.
Data Entry: Data is entered into the CRFs, either electronically by site personnel or through paper CRFs.
Data Validation: CDM teams implement validation checks to ensure data quality and consistency. Data validation checks may include range checks, consistency checks, and logic checks.
Query Management: Queries are generated when data discrepancies or inconsistencies are identified. CDM teams send queries to investigational sites for resolution.
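The validation-and-query steps above can be sketched as follows; this is a simplified, hypothetical model (check names, record fields, and the query structure are invented), in which each failed check on an entered record opens one query for the site to resolve.

```python
def generate_queries(records, checks):
    """Run every check against every record; return an open query per failure."""
    queries = []
    for rec in records:
        for name, check in checks.items():
            if not check(rec):
                queries.append({
                    "subject": rec["subject_code"],
                    "check": name,
                    "status": "open",   # closed once the site resolves it
                })
    return queries

# Hypothetical checks of the kinds listed above (range check, completeness check).
checks = {
    "sysbp_in_range": lambda r: 60 <= r["sysbp"] <= 250,
    "visit_named":    lambda r: bool(r.get("visit")),
}
records = [
    {"subject_code": "101-0001", "visit": "Screening", "sysbp": 128},
    {"subject_code": "101-0002", "visit": "", "sysbp": 400},
]
open_queries = generate_queries(records, checks)
# The second record fails both checks, so two queries are opened for it.
```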
Data Cleaning and Quality Control:
Data Cleaning: Data are cleaned to resolve errors, discrepancies, and inconsistencies. This involves querying data discrepancies with clinical trial sites.
Data Review: CDM teams review data to ensure completeness and accuracy, and any outstanding queries are resolved.
Quality Control: Quality control processes are applied to verify the integrity and accuracy of data.
Database Lock:
Once the data are cleaned, reviewed, and validated, the database is locked, indicating that no further changes can be made to the data. Database lock is a critical step before data analysis begins.
Data Export and Analysis:
Data is exported from the database and provided to biostatisticians and researchers for statistical analysis. This analysis is conducted to determine the study's outcomes, efficacy, and safety profile.
Data listings, summaries, and tables are generated for regulatory submissions, reports, and publications.
Final Study Reporting:
After data analysis, CDM teams contribute to the preparation of final study reports, which provide a comprehensive overview of the trial's results, data quality, and regulatory compliance.
Archiving and Documentation:
Clinical trial data, documentation, and databases are archived to ensure their long-term availability for regulatory audits and future reference.
Regulatory Submission: CDM teams provide support for regulatory submissions.
Redundant, duplicate, and repetitive publications are among the most important concerns in scientific research and literature writing. Redundancy undermines the integrity of the scientific literature and carries sanctions as consequences. Defining this issue is challenging because of the many ways in which one can slice, reformat, or reproduce material from an already published study. The issue also goes beyond the duplication of a single study: the same or similar data may be published in the early, middle, and later stages of an ongoing study, which can have a damaging impact on the scientific literature base. Like slicing a cake, there are many ways of representing a study or a set of data: a cake can be cut into squares, triangles, rounds, or layers. Which of these is the best way to slice a cake? Unfortunately, that may be the wrong question. The point is that the cake in question, the data set or the study's findings, should not be sliced at all. Instead, the study should be presented as a whole to the readership, to ensure the integrity of the science and because of the impact it may have on patients affected by the information in the literature. Redundant, duplicate, or repetitive publication occurs when two or more studies, data sets, or publications, in electronic or print media, overlap partially or completely, such that a similar portion, major component(s), or complete representation of a previously, simultaneously, or subsequently published study is duplicated.
SALAMI SLICING: Dividing research that would form one meaningful paper into several different papers is known as salami publication or salami slicing. Unlike duplicate publication, which involves reporting the exact same data in two or more publications, salami slicing involves breaking up or segmenting a large study into two or more publications; these segments are called slices of a study. As a general rule, when the slices of a broken-up study share the same hypotheses, population, and methods, the practice is not acceptable, and the same slice should never be published more than once. According to the United States Office of Research Integrity (USORI), salami slicing can distort the literature by leading unsuspecting readers to believe that the data presented in each salami slice (journal article) are derived from a different subject sample or source. This practice not only skews the scientific database but also creates repetition that wastes the time of readers, as well as that of editors and peer reviewers, who must handle each paper separately.
One of the most important research ethics issues that should be taken into consideration is "scientific misconduct," such as fabrication, falsification, and plagiarism. Plagiarism can occur at any stage of research activity, including reporting, communicating, authoring, and peer review. The purpose of this workshop is to engage researchers in their responsibility to conduct ethical research.
Visit: www.acriindia.com
ACRI is a leading clinical data management training institute in Bangalore, India.
ACRI creates added value for every degree. Our PGDCRCDM course is approved by Mysore University. Graduates, postgraduates, and even PhDs have trained with us and gone on to enviable positions in the clinical research industry. ACRI supplements university training with industry-based training, coupled with hands-on internships and projects based on real case studies. The ACRI brand gives the individual the confidence and expertise to join the ever-growing workforce, both in the country and abroad.
Workshop Part 2: Publication Ethics for Biomedical Researchers, BioMed Central (balaram_biomedcentral)
The second presentation in the 2015 BioMed Central author workshop presented at institutions in Brazil.
In this segment, Dr. Maria Kowalczuk, Biology Editor, shares information on research ethics and publication ethics, drawing from her experience as a member of the BioMed Central Research Integrity Group.
This talk discusses some of the ethical issues that can arise during scientific publication and the peer review process, and their implications. The presentation covers several issues, including scientific publication ethics, misconduct, research integrity, authorship and peer review ethics, as well as the Committee on Publication Ethics (COPE).
Bioprinting has been defined as the use of material transfer processes for patterning and assembling biologically relevant materials (molecules, cells, tissues, and biodegradable biomaterials) with a prescribed organization to accomplish one or more biological functions. This is a developmental-biology-inspired approach to tissue engineering, based on the assumption that tissues and organs are self-organizing systems, and that cells, and especially micro-tissues, can undergo biological self-assembly and self-organization without any external influence in the form of instructive, supporting, and directing rigid templates or solid scaffolds.
Bioprinting, the biomedical application of rapid prototyping, also defined as layer-by-layer additive biomanufacturing, is an emerging, transformative biomimetic technology with the potential to surpass traditional solid-scaffold-based tissue engineering. It is a rapid prototyping technology based on three-dimensional, automated, computer-aided deposition of "bioink particles" (multicellular spheroids) into a "biopaper" (a biocompatible gel, e.g. collagen) by a bioprinter.
Clinical Research Statistics for Non-Statisticians (Brook White, PMP)
Through real-world examples, this presentation teaches strategies for choosing appropriate outcome measures, methods for analysis and randomization, and sample sizes as well as tips for collecting the right data to answer your scientific questions.
Many organizations have full-fledged clinical trial data management systems that bring them a good amount of business and revenue.
CDM is a fundamental process that controls the data accuracy of each trial and helps timeliness to be achieved.
It links with clinical research coordinators, who monitor all the sites and collect the data.
It links with biostatisticians, who analyze, interpret, and report data in a clinically meaningful way.
These slides explain how 3D bioprinters work and introduce some of them, along with real-world examples of 3D bioprinter use.
Finally, the role of 3D bioprinters in human life is explained.
Electronic Data Capture & Remote Data Capture (CRB Tech)
CRB Tech is a leading software development company in Pune. We offer software development services as well as IT training, including Java, .NET, SEO, and clinical research training in Pune.
Writing the results section for scientific publication (Ashok Pandey)
To introduce participants to the details of communication and writing scientific papers.
To guide researchers in writing a scientific paper to increase its acceptability for publication in a journal; and
To upgrade the pre-existing knowledge of writing skills in a scientific manner.
Presented at useR! 2014, July 2, 2014.
The R ecosystem is in a state of near-constant change. While a new version of the R engine is now released just once a year, 2-3 patches are usually released in the interim. On top of that, new versions of R packages on CRAN are released at a rate of several per day (and that's not counting packages that are part of the BioConductor project or hosted elsewhere on the Web).
While this rapid change is a boon for the advancement of R, it can cause problems for package authors[1] and also for scientists and their peers who may need to reliably reproduce the results of an R script (possibly dependent on a number of packages) months or even years down the line. In this talk we propose a downstream distribution of CRAN packages that provides for the reproducibility of R scripts and reduces the impact of dependencies for package authors.
Results Vary: The Pragmatics of Reproducibility and Research Object Frameworks (Carole Goble)
Keynote presentation at the iConference 2015, Newport Beach, California, 26 March 2015.
Results Vary: The Pragmatics of Reproducibility and Research Object Frameworks
http://ischools.org/the-iconference/
BEWARE: presentation includes hidden slides AND in situ build animations - best viewed by downloading.
Clinical Data Management (CDM) is a critical component of clinical research that involves the collection, cleaning, validation, and management of clinical trial data to ensure its accuracy, integrity, and compliance with regulatory requirements. The workflow of CDM typically consists of several key stages, each with specific activities and processes. Here is an overview of the typical workflow of CDM:
Study Startup:
Protocol Review: CDM teams begin by reviewing the clinical trial protocol to understand the study's objectives, endpoints, data collection requirements, and timelines.
Database Design: Based on the protocol, the team designs a data capture system or electronic data capture (EDC) system. This includes creating data entry forms, defining data validation checks, and setting up data dictionaries.
Data Collection:
Case Report Form (CRF) Design: CDM professionals design electronic or paper CRFs to collect data during the trial. CRFs capture specific data points required by the protocol.
Data Entry: Data is entered into the CRFs, either electronically by site personnel or through paper CRFs.
Data Validation: CDM teams implement validation checks to ensure data quality and consistency. Data validation checks may include range checks, consistency checks, and logic checks.
Query Management: Queries are generated when data discrepancies or inconsistencies are identified. CDM teams send queries to investigational sites for resolution.
Data Cleaning and Quality Control:
Data Cleaning: Data are cleaned to resolve discrepancies, discrepancies, and inconsistencies. This involves querying data discrepancies with clinical trial sites.
Data Review: CDM teams review data to ensure completeness and accuracy, and any outstanding queries are resolved.
Quality Control: Quality control processes are applied to verify the integrity and accuracy of data.
Database Lock:
Once the data are cleaned, reviewed, and validated, the database is locked, indicating that no further changes can be made to the data. Database lock is a critical step before data analysis begins.
Data Export and Analysis:
Data is exported from the database and provided to biostatisticians and researchers for statistical analysis. This analysis is conducted to determine the study's outcomes, efficacy, and safety profile.
Data listings, summaries, and tables are generated for regulatory submissions, reports, and publications.
Final Study Reporting:
After data analysis, CDM teams contribute to the preparation of final study reports, which provide a comprehensive overview of the trial's results, data quality, and regulatory compliance.
Archiving and Documentation:
Clinical trial data, documentation, and databases are archived to ensure their long-term availability for regulatory audits and future reference.
Regulatory Submission: CDM teams provide support for regulatory submissions.
Redundant, Duplicate and Repetitive publications are the most important concerns in the scientific research/literature writing. The occurrence of redundancy affects the concepts of science/literature and carries with it sanctions of consequences. To define this issue is much challenging because of the many varieties in which one can slice, reformat, or reproduce material from an already published study. This issue also goes beyond the duplication of a single study because it might possible that the same or similar data can be published in the early, middle, and later stages of an on-going study. This may have a damaging impact on the scientific study/literature base. Similar to slicing a cake, there are so many ways of representing a study or a set of data/information. We can slice a cake into different shapes like squares, triangles, rounds, or layers. Which of these might be the best way to slice a cake? Unfortunately, this may be the wrong question. The point is that the cake that is being referred to, the data/ information set or the study/findings, should not be sliced at all. Instead, the study should be presented as a whole to the readership to ensure the integrity of science/technology because of the impact that may have on patients who will be affected by the information contained in the literature/findings. Redundant, duplicate, or repetitive publications occur when there is representation of two or more studies, data sets, or publications in either electronic or print media. The publications can overlap partially or completely, such that a similar portion, major component(s), or complete representation of a previously/simultaneous ly or future published study is duplicated.
SALAMI SLICING: The slicing of research publication that would form one meaningful paper into several different papers is known as salami publication or salami slicing. Unlike duplicate publication, which involves reporting the exact same data in two or more publications, salami slicing involves breaking up or segmenting a large study into two or more publications. These segments are called slices of a study. As a general rule, as long as the slices of a broken-up study share the same hypotheses, population, and methods, this is not acceptable in general practice. The same slice should never be published more than once at all. According to the United States Office of Research Integrity (USORI), salami slicing can result in a distortion of the literature/findings by leading unsuspecting readers to believe that data presented in each salami slice (journal article) is derived from a different subject sample/source. Somehow this practice not only skews the scientific database but it creates repetition to waste reader's time as well as the time of editors and peer reviewers, who must also handle each paper separately.
One of the most important research ethical issues that should be taken into consideration is “scientific misconduct” such as fabrication, falsification and plagiarism. Plagiarism can occur at any stage of the research activities such as reporting, communicating, authoring, and peer review. The purpose of this workshop is to engage researchers in their responsibility to conduct an ethical research.
Visit:www.acriindia.com
ACRI is a leading Clinical data management training Institute in Bangalore India.
ACRI creates a value add for every degree. Our PGDCRCDM course is approved by the Mysore University. Graduates and Post Graduates and even PhDs have trained with us and got enviable positions in the Clinical Research Industry. ACRI supplements University training with Industry based training, coupled with hands-on internships and projects based on real case studies. The ACRI brand gives the individual the confidence and expertise to join the ever-growing workforce both in the country and abroad.
Workshop Part 2: Publication Ethics for Biomedical Researchers (BioMed Centra...balaram_biomedcentral
The second presentation in the 2015 BioMed Central author workshop presented at institutions in Brazil.
In this segment, Dr. Maria Kowalczuk, Biology Editor, shares information on research ethics and publication ethics, drawing from her experience as a member of the BioMed Central Research Integrity Group.
The aim of this talk is to discusses some of the ethical issues that can arise during scientific publication and the peer review process and discusses their implications. The presentation covers several issue including the scientific publication ethics, misconduct, integrity of the research, authorship and peer review ethics as well as Committee on publication Ethics (COPE) ,
Bioprinting was defined as the use of material transfer processes for patterning and assembling biologically relevant materials- molecules, cells, tissues, and biodegradable biomaterials with a prescribed organization to accomplish one or more biological function. This is a developmental biology- inspired approach to tissue engineering and is based on the assumption that tissues and organs are self- organizing systems, and that cells and especially micro tissues can undergo biological self- assembly and self- organization without any external influence in the form of instructive, supporting and directing rigid templates or solid scaffolds.
Bioprinting or the biomedical application of rapid prototyping, also defined as layer- by- layer additive biomanufacturing, is an emerging transforming biomimetic technology that has potential for surpassing traditional solid scaffold- based tissue engineering. It is a rapid prototyping technology based on three dimensional, automated, computer-aided deposition of ‘‘bioink particles’’ (multicellular spheroids) into a ‘‘biopaper’’ (biocompatible gel; e.g. collagen) by a bioprinter
Clinical Research Statistics for Non-StatisticiansBrook White, PMP
Through real-world examples, this presentation teaches strategies for choosing appropriate outcome measures, methods for analysis and randomization, and sample sizes as well as tips for collecting the right data to answer your scientific questions.
Have full fleged clinical trial data management systems which bring them a good amount of business and revenue.
CDM is a fundamental process which controls data accuracy of each trial besides helping the timelessness to be achieved.
It helps in linking clinical research co-ordinator = who monitor all the sites & collects the data.
it Links with biostatisticians = who analyze, interpret and report data in clinically meaningful way.
It has been expleined in these slides that how 3D bioprinters work and some of them have been introdused. Also some examples of use 3D bioprinter in reality are introduced.
Finally feature of 3D bioprinters in human life has been explained.
Electronic Data Capture & Remote Data CaptureCRB Tech
CRB Tech is one of the best leading Software Development Company in Pune. We are offering Software Development Services as well as IT Training including Java, Dot Net, SEO and Clinical Research training in pune.
Writing the results section for scientific publicationAshok Pandey
To introduce participants to the details of communication and writing scientific papers.
To guide researchers in the writing of scientific paper to increase its acceptability for publication in a journal; and
To upgrade the pre-existing knowledge of writing skills in a scientific manner.
Presented at useR! 2014, July 2, 2014.
The R ecosystem is in a state of near constant change. While a new version of the R engine is now released just once a year, 2-3 patches are usually released in the interim. On top of that, new versions of R packages on CRAN are released at rate of several per day (and that’s not counting packages that are part of the BioConductor project or hosted elsewhere on the Web).
While this rapid change is a boon for the advancement of R, it can cause problems for package authors[1] and also for scientists and their peers who may need to reliably reproduce the results of an R script (possibly dependent on a number of packages) months or even years down the line. In this talk we propose a downstream distribution of CRAN packages that provides for the reproducibility of R scripts and reduces the impact of dependencies for packages authors.
Results Vary: The Pragmatics of Reproducibility and Research Object FrameworksCarole Goble
Keynote presentation at the iConference 2015, Newport Beach, California, 26 March 2015.
Results Vary: The Pragmatics of Reproducibility and Research Object Frameworks
http://ischools.org/the-iconference/
BEWARE: presentation includes hidden slides AND in situ build animations - best viewed by downloading.
ISMB/ECCB 2013 Keynote Goble Results may vary: what is reproducible? why do o...Carole Goble
Keynote given by Carole Goble on 23rd July 2013 at ISMB/ECCB 2013
http://www.iscb.org/ismbeccb2013
How could we evaluate research and researchers? Reproducibility underpins the scientific method: at least in principle if not practice. The willing exchange of results and the transparent conduct of research can only be expected up to a point in a competitive environment. Contributions to science are acknowledged, but not if the credit is for data curation or software. From a bioinformatics viewpoint, how far could our results be reproducible before the pain is just too high? Is open science a dangerous, utopian vision or a legitimate, feasible expectation? How do we move bioinformatics from one where results are post-hoc “made reproducible” to one where they are pre-hoc “born reproducible”? And why, in our computational information age, do we communicate results through fragmented, fixed documents rather than cohesive, versioned releases? I will explore these questions drawing on 20 years of experience in both the development of technical infrastructure for Life Science and the social infrastructure in which Life Science operates.
RARE and FAIR Science: Reproducibility and Research ObjectsCarole Goble
Keynote at JISC Digifest 2015 on Reproducibility and Research Objects in Scholarly Communication
Includes hidden slides
All material, except maybe the IT Crowd screengrab, is reusable.
Keynote speech - Carole Goble - Jisc Digital Festival 2015Jisc
Carole Goble is a professor in the school of computer science at the University of Manchester.
In this keynote, Carole offered her insights into research data management and data centres.
Open Research Practices in the Age of a Papermill PandemicDorothy Bishop
Talk given to Open Research Group, Maynooth University, October 2022.
Describes the phenomenon of large-scale fraudulent science publishing (papermills), and discusses how open science practices can help tackle this.
Presentation to CRC Mental Health Early Career Researcher Workshop, Melbourne 29.11.17 for @andsdata.
Workshop title: A by-product of scientific training: We're all a little bit biased.
Why study Data Sharing? (+ why share your data)Heather Piwowar
A presentation to the DBMI department at the University of Pittsburgh about data sharing and reuse: what this means, why it is important, some of what we’ve learned, and what we still don’t know.
The Evolution of e-Research: Machines, Methods and MusicDavid De Roure
David De Roure's Inaugural Lecture on 28th October at Oxford e-Research Centre, University of Oxford, UK
10 years ago we saw a few early adopters of e-Science technology; now we see acceleration of research through broader adoption and sharing of tools, techniques and artefacts, both for 'big science' and the 'long tail scientist'.
Will this incremental trend continue or are we seeing glimpses of a phase change ahead, where researchers harness these emerging digital capabilities to address research questions in ways that simply were not possible before?
This talk will describe three generations of e-Research, using the myExperiment social website as a lens to glimpse future research practice, and focusing on a web-scale computational musicology project as an illustration of 3rd generation thinking.
Also available from http://wiki.myexperiment.org/index.php/Presentations
Lecture for a course at NTNU, 27th January 2021
CC-BY 4.0 Dag Endresen https://orcid.org/0000-0002-2352-5497
See also http://bit.ly/biodiversityinformatics
https://www.gbif.no/events/2021/lecture-ntnu-gbif.html
Recommendations for infrastructure and incentives for open science, presented to the Research Data Alliance 6th Plenary. Presenter: William Gunn, Director of Scholarly Communications for Mendeley.
The ELIXIR FAIR Knowledge Ecosystem for practical know-how: RDMkit and FAIRCo...Carole Goble
Presented at the FAIR Data in Practice Symposium, 16 May 2023 at BioITWorld Boston. https://www.bio-itworldexpo.com/fair-data. The ELIXIR European research Infrastructure for life science data is an inter-governmental organisation coordinating, integrating and sustaining FAIR data and software resources across its 23 nations. To help advise users, data stewards, project managers and service providers, ELIXIR has developed complementary community-driven, open knowledge resources for guiding FAIR Research Data Management (RDMkit) and providing FAIRification recipes (FAIRCookbook). 150+ people have contributed content so far, including representatives of the pharmaceutical industry.
Can’t Pay, Won’t Pay, Don’t Pay: Delivering open science, a Digital Research...Carole Goble
Invited talk, PHIL_OS, March 30-31 2023, Exeter
https://opensciencestudies.eu/whither-open-science. Includes hidden slides.
FAIR and Open Science need Digital Research Infrastructure: a federated system of systems, with funding models that are fit for purpose.
Culture change is needed in how we pay for Open Science’s infrastructure, and funding support for data-driven research needs more reality and less rhetoric.
RO-Crate: packaging metadata love notes into FAIR Digital ObjectsCarole Goble
Abstract
slides available at: https://zenodo.org/record/7147703#.Y7agoxXP2F4
The Helmholtz Metadata Collaboration aims to make the research data [and software] produced by Helmholtz Centres FAIR for their own and the wider science community by means of metadata enrichment [1]. Why metadata enrichment and why FAIR? Because the whole scientific enterprise depends on a cycle of finding, exchanging, understanding, validating, reproducing, integrating and reusing research entities across a dispersed community of researchers.
Metadata is not just “a love note to the future” [2], it is a love note to today’s collaborators and peers. Moreover, a FAIR Commons must cater for the metadata of all the entities of research – data, software, workflows, protocols, instruments, geo-spatial locations, specimens, samples, people (as well as traditional articles) – and their interconnectivity. That is a lot of metadata love notes to manage, bundle up and move around. Notes written in different languages at different times by different folks, produced and hosted by different platforms, yet referring to each other, and building an integrated picture of a multi-part and multi-party investigation. We need a crate!
RO-Crate [3] is an open, community-driven, and lightweight approach to packaging research entities along with their metadata in a machine-readable manner. Following key principles - “just enough” and “developer and legacy friendliness” - RO-Crate simplifies the process of making research outputs FAIR while also enhancing research reproducibility and citability. As a self-describing and unbounded “metadata middleware” framework RO-Crate shows that a little bit of packaging goes a long way to realise the goals of FAIR Digital Objects (FDO)[4], and to not just overcome platform diversity but celebrate it while retaining investigation contextual integrity.
In this talk I will present the why, and how Research Object packaging eases Metadata Collaboration using examples in big data and mixed object exchange, mixed object archiving and publishing, mass citation, and reproducibility. Some examples come from the HMC, others from EOSC, USA and Australia, and from different disciplines.
Metadata is a love note to the future, RO-Crate is the delivery package.
[1] https://helmholtz-metadaten.de/en
[2] Scott, Jason The Metadata Mania, http://ascii.textfiles.com/archives/3181, June 2011
[3] Soiland-Reyes, Stian et al. “Packaging Research Artefacts with RO-Crate”. Data Science, 2022; 5(2):97-138, DOI: 10.3233/DS-210053
[4] De Smedt K, Koureas D, Wittenburg P. “FAIR Digital Objects for Science: From Data Pieces to Actionable Knowledge Units”. Publications. 2020; 8(2):21. https://doi.org/10.3390/publications8020021
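As a concrete illustration of the “crate” idea described above, here is a minimal sketch of an RO-Crate metadata descriptor built as a plain Python dictionary. The packaged file name (results.csv) and the descriptive names are hypothetical; a real crate would follow the full RO-Crate 1.1 specification.

```python
import json

# A minimal, illustrative RO-Crate 1.1 metadata descriptor.
# The file name "results.csv" and the "name" values are hypothetical.
crate = {
    "@context": "https://w3id.org/ro/crate/1.1/context",
    "@graph": [
        {
            # The self-describing metadata file entity.
            "@id": "ro-crate-metadata.json",
            "@type": "CreativeWork",
            "conformsTo": {"@id": "https://w3id.org/ro/crate/1.1"},
            "about": {"@id": "./"},
        },
        {
            # The root dataset: the crate bundling the research entities.
            "@id": "./",
            "@type": "Dataset",
            "name": "Example multi-party investigation bundle",
            "hasPart": [{"@id": "results.csv"}],
        },
        {
            # One packaged entity carrying its own metadata "love note".
            "@id": "results.csv",
            "@type": "File",
            "name": "Tabular results of the analysis",
        },
    ],
}

# Serialized, this JSON-LD sits alongside the packaged files
# as ro-crate-metadata.json.
metadata = json.dumps(crate, indent=2)
```

Because the descriptor is ordinary JSON-LD, any platform can read or write it without special tooling, which is the “just enough” and “legacy friendliness” point made above.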
Research Software Sustainability takes a VillageCarole Goble
The Research Software Alliance (ReSA) and the Netherlands eScience Center hosted a two-day international workshop to set the future agenda for national and international funders to support sustainable research software.
As the importance of software in research has become increasingly apparent, so has the urgent need to sustain it. Funders can play a crucial role in this respect by ensuring structural support. Over the past few years, a variety of methods for sustaining research software have been explored, including improving and extending funding policies and instruments. During the workshop, funding organizations joined forces to explore how they can effectively contribute to making research software sustainable.
This keynote helped frame the discussion from the perspective of community involvement in research software sustainability.
https://future-of-research-software.org/
this talk is available at Goble, Carole. (2022, November 8). Research Software Sustainability takes a Village. International funders workshop, The Future of Research Software, Amsterdam, The Netherlands. Zenodo. https://doi.org/10.5281/zenodo.7304596
“Bioscience has emerged as a data-rich discipline, in a transformation that is spreading as widely now as molecular biology in the twentieth century. We look forward to supporting new research careers, where data are valued and shared widely, where new software is a natural part of Biology, and where re-analysis and modelling are as creative as experimentation in understanding the rules of life and their applications.” Prof Andrew Millar FRS, chair Expert Group UKRI-BBSRC Review of data-intensive bioscience 2020.
Indeed - biomedical science is knowledge work and knowledge turning - the turning of observation and hypothesis through experimentation, comparison, and analysis into new, pooled knowledge. Turns depend on the FAIR and Open flow and availability of data and methods for automated processing and reproducible results, and on a society of scientists coordinating and collaborating.
For the past 25 years I have worked on the social and technical challenges in digital infrastructure to support scientific collaboration, data and method sharing, and automate scientific processing. Big ideas I have been instrumental in – sharing and publishing high quality computational workflows, semantic web technologies in bioscience, ecosystems of Research Objects as the currency of scholarly knowledge, FAIR data principles - preached revolution to inspire but need nudges* to get traction.
I’ll talk about making good on Andrew’s quote: what I’m doing to nudge and where we need to do more. I’ll also talk about my experiences as a woman in a digital infrastructure and computer science over the past 40 years – and some nudging is needed there too.
*Thaler RH, Sunstein CR (2008) Nudge: Improving Decisions about Health, Wealth, and Happiness. Yale University Press. ISBN 978-0-14-311526-7. OCLC 791403664.
https://www.bsc.es/research-and-development/research-seminars/hybrid-bsc-rslife-sessionbioinfo4women-seminar-love-money-fame-nudge-enabling-data-intensive
Open Research: Manchester leading and learningCarole Goble
Open and FAIR science has international momentum. Large-scale communities are striving to make and manage the digital infrastructure needed for scientists to be as open as possible and as closed as necessary, as expected by the NIH, OECD, UNESCO and the EC. ELIXIR is such a research infrastructure in Europe for the Life Sciences. This talk will highlight two of ELIXIR's Open Science resources, built by Open Science communities to enable life science researchers to be open, and led by Manchester. And how can we learn from these and bring these practices to Manchester?
Launch: Manchester Office for Open Research, 4th April 2022
https://www.openresearch.manchester.ac.uk/
RDMkit, a Research Data Management Toolkit. Built by the Community for the ...Carole Goble
https://datascience.nih.gov/news/march-data-sharing-and-reuse-seminar 11 March 2022
Starting in 2023, the US National Institutes of Health (NIH) will require institutes and researchers receiving funding to include a Data Management Plan (DMP) in their grant applications, including making their data publicly available. Similar mandates are already in place in Europe; for example, a DMP is mandatory in Horizon Europe projects involving data.
Policy is one thing - practice is quite another. How do we provide the necessary information, guidance and advice for our bioscientists, researchers, data stewards and project managers? There are numerous repositories and standards. Which is best? What are the challenges at each step of the data lifecycle? How should different types of data be handled? What tools are available? Research Data Management advice is often too general to be useful, and specific information is fragmented and hard to find.
ELIXIR, the pan-national European Research Infrastructure for Life Science data, aims to enable research projects to operate “FAIR data first”. ELIXIR supports researchers across their whole RDM lifecycle, navigating the complexity of a data ecosystem that bridges from local cyberinfrastructures to pan-national archives and across bio-domains.
The ELIXIR RDMkit (https://rdmkit.elixir-europe.org (link is external)) is a toolkit built by the biosciences community, for the biosciences community to provide the RDM information they need. It is a framework for advice and best practice for RDM and acts as a hub of RDM information, with links to tool registries, training materials, standards, and databases, and to services that offer deeper knowledge for DMP planning and FAIR-ification practices.
Launched in March 2021, over 120 contributors have provided nearly 100 pages of content and links to more than 300 tools. Content covers the data lifecycle and specialized domains in biology, national considerations and examples of “tool assemblies” developed to support RDM. It has been accessed from over 123 countries, and the top of the access list is … the United States.
The RDMkit is already a recommended resource of the European Commission. The platform, editorial, and contributor methods helped build a specialized sister toolkit for infectious diseases as part of the recently launched BY-COVID project. The toolkit’s platform is the simplest we could manage - built on plain GitHub - and the whole development and contribution approach tailored to be as lightweight and sustainable as possible.
In this talk, Carole and Frederik will present the RDMkit; aims and context, content, community management, how folks can contribute, and our future plans and potential prospects for trans-Atlantic cooperation.
Data policy must be partnered with data practice. Our researchers need to be the best informed in order to meet these new data management and data sharing mandates.
presented at WORKS 2021
https://works-workshop.org/
16th Workshop on Workflows in Support of Large-Scale Science
November 15, 2021
Held in conjunction with SC21: The International Conference for High Performance Computing, Networking, Storage and Analysis
presentation at https://researchsoft.github.io/FAIReScience/, FAIReScience 2021 online workshop
virtually co-located with the 17th IEEE International Conference on eScience (eScience 2021)
German Conference on Bioinformatics 2021
https://gcb2021.de/
FAIR Computational Workflows
Computational workflows capture precise descriptions of the steps and data dependencies needed to carry out computational data pipelines, analysis and simulations in many areas of Science, including the Life Sciences. The use of computational workflows to manage these multi-step computational processes has accelerated in the past few years driven by the need for scalable data processing, the exchange of processing know-how, and the desire for more reproducible (or at least transparent) and quality assured processing methods. The SARS-CoV-2 pandemic has significantly highlighted the value of workflows.
This increased interest in workflows has been matched by the number of workflow management systems available to scientists (Galaxy, Snakemake, Nextflow and 270+ more) and the number of workflow services like registries and monitors. There is also recognition that workflows are first class, publishable Research Objects just as data are. They deserve their own FAIR (Findable, Accessible, Interoperable, Reusable) principles and services that cater for their dual roles as explicit method description and software method execution [1]. To promote long-term usability and uptake by the scientific community, workflows (as well as the tools that integrate them) should become FAIR+R(eproducible), and citable so that author’s credit is attributed fairly and accurately.
The work on improving the FAIRness of workflows has already started and a whole ecosystem of tools, guidelines and best practices has been under development to reduce the time needed to adapt, reuse and extend existing scientific workflows. An example is the EOSC-Life Cluster of 13 European Biomedical Research Infrastructures which is developing a FAIR Workflow Collaboratory based on the ELIXIR Research Infrastructure for Life Science Data Tools ecosystem. While there are many tools for addressing different aspects of FAIR workflows, many challenges remain for describing, annotating, and exposing scientific workflows so that they can be found, understood and reused by other scientists.
This keynote will explore the FAIR principles for computational workflows in the Life Sciences using the EOSC-Life Workflow Collaboratory as an example.
[1] Carole Goble, Sarah Cohen-Boulakia, Stian Soiland-Reyes,Daniel Garijo, Yolanda Gil, Michael R. Crusoe, Kristian Peters, and Daniel Schober FAIR Computational Workflows Data Intelligence 2020 2:1-2, 108-121 https://doi.org/10.1162/dint_a_00033.
FAIR Data Bridging from researcher data management to ELIXIR archives in the...Carole Goble
ISMB-ECCB 2021, NIH/ODSS Session, 27 July 2021
ELIXIR is the pan-national European Research Infrastructure for Life Science data, whose 23 national nodes and the EBI coordinate the development and long-term sustainability of domain public databases. FAIR services, policies and curation approaches aim to build a FAIR connected data ecosystem of trusted domain repositories, from ENA, HPA and EGA to specialised resources like CorkOakDB and PIPPA for plant phenotypes. But this is only one part of the data landscape and often the end of data’s journey. The nodes support research projects to operate “FAIR data first”, working with institutional and national platforms that are often generic or designed for project-based data management. We need to bridge between project-based and community-based data management, and support researchers across their whole RDM lifecycle, navigating the complexity of this ecosystem. The ELIXIR-CONVERGE project and its flagship RDMkit toolkit (https://rdmkit.elixir-europe.org) aims to do just that.
FAIR Workflows and Research Objects get a Workout Carole Goble
So, you want to build a pan-national digital space for bioscience data and methods? That works with a bunch of pre-existing data repositories and processing platforms? So you can share FAIR workflows and move them between services? Package them up with data and other stuff (or just package up data for that matter)? How? WorkflowHub (https://workflowhub.eu) and RO-Crate Research Objects (https://www.researchobject.org/ro-crate) that’s how! A step towards FAIR Digital Objects gets a workout.
Presented at DataVerse Community Meeting 2021
FAIRy stories: the FAIR Data principles in theory and in practiceCarole Goble
https://ucsb.zoom.us/meeting/register/tZYod-ippz4pHtaJ0d3ERPIFy2QIvKqjwpXR
FAIRy stories: the FAIR Data principles in theory and in practice
The ‘FAIR Guiding Principles for scientific data management and stewardship’ [1] launched a global dialogue within research and policy communities and started a journey to wider accessibility and reusability of data and preparedness for automation-readiness (I am one of the army of authors). Over the past 5 years FAIR has become a movement, a mantra and a methodology for scientific research and increasingly in the commercial and public sector. FAIR is now part of NIH, European Commission and OECD policy. But just figuring out what the FAIR principles really mean and how we implement them has proved more challenging than one might have guessed. To quote the novelist Rick Riordan “Fairness does not mean everyone gets the same. Fairness means everyone gets what they need”.
As a data infrastructure wrangler I lead and participate in projects implementing forms of FAIR in pan-national European biomedical Research Infrastructures. We apply web-based industry-led approaches like Schema.org; work with big pharma on specialised FAIRification pipelines for legacy data; promote FAIR by Design methodologies and platforms into the researcher lab; and expand the principles of FAIR beyond data to computational workflows and digital objects. Many use Linked Data approaches.
In this talk I’ll use some of these projects to shine some light on the FAIR movement. Spoiler alert: although there are technical issues, the greatest challenges are social. FAIR is a team sport. Knowledge Graphs play a role – not just as consumers of FAIR data but as active contributors. To paraphrase another novelist, “It is a truth universally acknowledged that a Knowledge Graph must be in want of FAIR data.”
[1] Wilkinson, M., Dumontier, M., Aalbersberg, I. et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data 3, 160018 (2016). https://doi.org/10.1038/sdata.2016.18
RO-Crate: A framework for packaging research products into FAIR Research ObjectsCarole Goble
RO-Crate: A framework for packaging research products into FAIR Research Objects presented to Research Data Alliance RDA Data Fabric/GEDE FAIR Digital Object meeting. 2021-02-25
The swings and roundabouts of a decade of fun and games with Research Objects Carole Goble
Research Objects and their instantiation as RO-Crate: motivation, explanation, examples, history and lessons, and opportunities for scholarly communications, delivered virtually to 17th Italian Research Conference on Digital Libraries
How are we Faring with FAIR? (and what FAIR is not)Carole Goble
Keynote presented at the workshop FAIRe Data Infrastructures, 15 October 2020
https://www.gmds.de/aktivitaeten/medizinische-informatik/projektgruppenseiten/faire-dateninfrastrukturen-fuer-die-biomedizinische-informatik/workshop-2020/
Remarkably it was only in 2016 that the ‘FAIR Guiding Principles for scientific data management and stewardship’ appeared in Scientific Data. The paper was intended to launch a dialogue within the research and policy communities: to start a journey to wider accessibility and reusability of data and prepare for automation-readiness by supporting findability, accessibility, interoperability and reusability for machines. Many of the authors (including myself) came from biomedical and associated communities. The paper succeeded in its aim, at least at the policy, enterprise and professional data infrastructure level. Whether FAIR has impacted the researcher at the bench or bedside is open to doubt. It certainly inspired a great deal of activity, many projects, a lot of positioning of interests and raised awareness. COVID has injected impetus and urgency to the FAIR cause (good) and also highlighted its politicisation (not so good).
In this talk I’ll make some personal reflections on how we are faring with FAIR: as one of the original principles authors; as a participant in many current FAIR initiatives (particularly in the biomedical sector and for research objects other than data) and as a veteran of FAIR before we had the principles.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
Cancer cell metabolism: special Reference to Lactate PathwayAADYARAJPANDEY1
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy they need to function.
Energy is stored in the bonds of glucose and when glucose is broken down, much of that energy is released.
Cells utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules of a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Krebs cycle. The Krebs cycle allows cells to “burn” the pyruvate made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis - Krebs cycle - oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELL:
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
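The glucose arithmetic above can be put in a toy calculation. The 2 and ~36 ATP figures are the approximate textbook values used in the text; the energy demand is an arbitrary illustrative number.

```python
# Back-of-the-envelope ATP yields per glucose (approximate textbook figures).
ATP_GLYCOLYSIS = 2   # net ATP per glucose from glycolysis alone
ATP_FULL = 36        # ~ATP per glucose from complete aerobic respiration

# Hypothetical energy demand: ATP molecules a cell needs (illustrative only).
demand = 720

# Glucose molecules required to meet that demand.
glucose_normal = demand / ATP_FULL        # cell completing oxidative phosphorylation
glucose_cancer = demand / ATP_GLYCOLYSIS  # glycolysis-only (Warburg-like) cell

print(glucose_normal)  # 20.0
print(glucose_cancer)  # 360.0 -- 18x more glucose for the same energy
```

The 18-fold difference is why a glycolysis-dependent cancer cell must take up far more glucose than a normal cell to survive.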
Introduction to the WARBURG EFFECT:
Usually, cancer cells are highly glycolytic (“glucose addiction”) and take up more glucose from outside than normal cells do.
Otto Heinrich Warburg (8 October 1883 – 1 August 1970) was awarded the 1931 Nobel Prize in Physiology or Medicine for his “discovery of the nature and mode of action of the respiratory enzyme”.
WARBURG EFFECT: the tendency of cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg observed that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest
imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters
spanning 0.4−0.9µm) and novel JWST images with 14 filters spanning 0.8−5µm, including 7 mediumband filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data
at > 2.3µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and
30.3-31.0 AB mag (5σ, r = 0.1” circular aperture) in individual filters. We measure photometric
redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts
z = 11.5 − 15. These objects show compact half-light radii of R1/2 ∼ 50 − 200pc, stellar masses of
M⋆ ∼ 107−108M⊙, and star-formation rates of SFR ∼ 0.1−1 M⊙ yr−1
. Our search finds no candidates
at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to
infer the properties of the evolving luminosity function without binning in redshift or luminosity that
marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the
impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results,
and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5
from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical
models for evolution of the dark matter halo mass function.
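The factor-of-~2.5 decline quoted above concerns the luminosity function normalization. As a toy illustration only (the paper's forward model does not bin and does not assume these parameters), one can see that with the shape held fixed, the UV luminosity density scales linearly with that normalization. Here we assume a Schechter form with hypothetical parameters:

```python
import math

def schechter_phi(L, phi_star, L_star, alpha):
    """Schechter luminosity function dn/dL (per unit luminosity)."""
    x = L / L_star
    return (phi_star / L_star) * x**alpha * math.exp(-x)

def uv_luminosity_density(phi_star, L_star, alpha, lmin=0.03, lmax=30.0, n=10000):
    """Numerically integrate L * phi(L) dL (trapezoid rule) over L/L* in [lmin, lmax]."""
    total = 0.0
    step = (lmax - lmin) / n  # step in units of L/L*
    for i in range(n):
        L1 = (lmin + i * step) * L_star
        L2 = L1 + step * L_star
        f1 = L1 * schechter_phi(L1, phi_star, L_star, alpha)
        f2 = L2 * schechter_phi(L2, phi_star, L_star, alpha)
        total += 0.5 * (f1 + f2) * step * L_star
    return total

# With alpha and L* held fixed, rho_UV scales linearly with phi*:
rho_z12 = uv_luminosity_density(phi_star=1.0, L_star=1.0, alpha=-2.0)
rho_z14 = uv_luminosity_density(phi_star=1.0 / 2.5, L_star=1.0, alpha=-2.0)
print(rho_z12 / rho_z14)  # ≈ 2.5
```

This is only the scaling argument: if the shape of the luminosity function does not evolve, a 2.5x drop in normalization implies the same 2.5x drop in UV luminosity density.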
Nutraceutical market, scope and growth: Herbal drug technology by Lokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market, which includes products such as functional foods, beverages, and dietary supplements that offer health benefits beyond basic nutrition, is expanding significantly. Rising healthcare costs, an aging population, and growing demand for natural and preventive health solutions are driving this rapid growth. Innovations in product formulation and the adoption of advanced technologies for personalized nutrition are further fueling market expansion. Given its global reach, the nutraceutical industry is expected to keep growing, offering significant opportunities for research and investment across categories including vitamins, minerals, probiotics, and herbal supplements.
Multi-source connectivity as the driver of solar wind variability in the heli... by Sérgio Sacani
The ambient solar wind that fills the heliosphere originates from multiple
sources in the solar corona and is highly structured. It is often described
as high-speed, relatively homogeneous, plasma streams from coronal
holes and slow-speed, highly variable, streams whose source regions are
under debate. A key goal of ESA/NASA’s Solar Orbiter mission is to identify
solar wind sources and understand what drives the complexity seen in the
heliosphere. By combining magnetic field modelling and spectroscopic
techniques with high-resolution observations and measurements, we show
that the solar wind variability detected in situ by Solar Orbiter in March
2022 is driven by spatio-temporal changes in the magnetic connectivity to
multiple sources in the solar atmosphere. The magnetic field footpoints
connected to the spacecraft moved from the boundaries of a coronal hole
to one active region (12961) and then across to another region (12957). This
is reflected in the in situ measurements, which show the transition from fast
to highly Alfvénic then to slow solar wind that is disrupted by the arrival of
a coronal mass ejection. Our results describe solar wind variability at 0.5 au
but are applicable to near-Earth observatories.
This PDF is about schizophrenia.
For more details, visit SELF-EXPLANATORY on YouTube:
https://www.youtube.com/channel/UCAiarMZDNhe1A3Rnpr_WkzA/videos
Thanks!
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making. They monitor common gases, weather parameters, and particulates.
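A reading from such a system typically bundles gas, particulate, and weather channels together. A minimal sketch, with hypothetical field names and illustrative (not regulatory) limits:

```python
from dataclasses import dataclass

# Hypothetical reading structure and thresholds for illustration only;
# real systems define their own channels and compliance limits.
@dataclass
class AirQualityReading:
    co2_ppm: float        # common gas
    pm2_5_ugm3: float     # particulate matter (µg/m³)
    temperature_c: float  # weather parameter
    humidity_pct: float   # weather parameter

def exceeds_limits(r: AirQualityReading, pm2_5_limit=35.0, co2_limit=1000.0):
    """Flag a reading that breaches the illustrative limits above."""
    return r.pm2_5_ugm3 > pm2_5_limit or r.co2_ppm > co2_limit

reading = AirQualityReading(co2_ppm=650, pm2_5_ugm3=42.0,
                            temperature_c=21.5, humidity_pct=48.0)
print(exceeds_limits(reading))  # True (PM2.5 above the 35 µg/m³ limit)
```

On-site flagging like this is what enables the immediate compliance decisions mentioned above, rather than waiting for lab analysis.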
Reproducibility and Scientific Research: why, what, where, when, who, how
1. Reproducibility
and Scientific
Research
Professor Carole Goble CBE FREng FBCS
The University of Manchester, UK
carole.goble@manchester.ac.uk
Open Data Manchester, 27th January 2015
why, what, where, when, who, how
2. Scientific publications have at least
two goals:
(i) to announce a result and
(ii) to convince readers that the
result is correct
…..
papers in experimental [and
computational science] should
describe the results and provide a
clear enough protocol [or
algorithm] to allow successful
repetition and extension
Jill Mesirov
Accessible Reproducible Research
Science 22 January 2010:
Vol. 327 no. 5964 pp. 415-416
DOI: 10.1126/science.1179653
Virtual Witnessing / Minute Taking
7. “an experiment is reproducible until
another laboratory tries to repeat it.”
Alexander Kohn
8.
9. design
cherry picking data, random seed
reporting, non-independent bias, poor
positive and negative controls, dodgy
normalisation, arbitrary cut-offs,
premature data triage, un-validated
materials, improper statistical analysis,
poor statistical power, stop when “get to
the right answer”, software
misconfigurations misapplied black box
software
reporting
John P. A. Ioannidis, Why Most Published Research Findings Are False, August 30, 2005,
DOI: 10.1371/journal.pmed.0020124
incomplete reporting of software configurations, parameters & resource
versions, missed steps, missing data, vague methods, missing software
Joppa et al., Troubling Trends in Scientific Software Use, Science 340, May 2013
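The reporting gaps listed above (unset random seeds, unrecorded software versions and parameters) can be closed with a small amount of bookkeeping. A minimal sketch in Python; the experiment and field names are illustrative, not from any of the cited studies:

```python
import json
import platform
import random
import sys

def run_experiment(seed, threshold):
    """A stand-in analysis: with a fixed seed, the result is repeatable."""
    random.seed(seed)
    data = [random.gauss(0, 1) for _ in range(1000)]
    return sum(1 for x in data if x > threshold)

# Record everything a reader would need to repeat the run.
config = {
    "python_version": sys.version.split()[0],
    "platform": platform.platform(),
    "seed": 42,
    "threshold": 1.0,
}
result = run_experiment(config["seed"], config["threshold"])

# Persist the full configuration next to the result, not in a lab notebook.
record = {"config": config, "result": result}
print(json.dumps(record["config"]["seed"]))  # 42
```

The point is not the toy analysis but the habit: seed, parameters, and environment versions travel with the result, so the computation can be rerun byte-for-byte.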
Empirical
Statistical
Computational
V. Stodden, IMS Bulletin (2013)
10.
11. Transparency / Availability Gap
1. Ioannidis et al., 2009. Repeatability of published microarray gene expression analyses. Nature Genetics 41: 14
2. Science publishing: The trouble with retractions http://www.nature.com/news/2011/111005/full/478026a.html
3. Bjorn Brembs: Open Access and the looming crisis in science https://theconversation.com/open-access-and-the-looming-crisis-in-science-14950
Out of 18 microarray papers, results
from 10 could not be reproduced
13. Broken software, broken science
• Geoffrey Chang, The Scripps Research Institute
• Homemade data-analysis program
inherited from another lab
• Flipped two columns of data,
inverting the electron-density map
used to derive protein structure
• Retract 3 Science papers and 2 papers
in other journals
• One paper cited 364 times
[Figure: The structures of MsbA (purple) and Sav1866 (green) overlap little (left) until MsbA is inverted (right).]
Miller, A Scientist's Nightmare: Software Problem Leads to Five Retractions, Science 22 December 2006:
vol. 314 no. 5807 1856-1857
http://www.software.ac.uk/blog/2014-12-04-its-impossible-conduct-research-without-software-say-7-out-10-uk-researchers
14. “An article about computational science in a
scientific publication is not the scholarship
itself, it is merely advertising of the
scholarship. The actual scholarship is the
complete software development
environment, [the complete data] and the
complete set of instructions which generated
the figures.”
David Donoho, “Wavelab and Reproducible
Research,” 1995
Morin et al Shining Light into Black Boxes Science 13 April 2012: 336(6078) 159-160
Ince et al., The case for open computer programs, Nature 482, 2012
algorithms
configurations
tools and apps
codes
workflows
scripts
code libraries
third party services,
system software
infrastructure,
compilers
hardware
Self-contained codes??
15.
16. WHY? 12+3 reasons
research goes “wrong”
1. Pressure to publish
2. Impact factor mania
3. Tainted resources
4. Bad maths
5. Sins of omission
6. Science is messy
7. Broken peer review
8. Some scientists don’t share
9. Research never reported
10. Poor training -> sloppiness
11. Honest error
12. Fraud
13. Disorganisation & time pressures
14. Cost to prepare and curate materials
15. Inherently "unreplicable" (one-off data, specialist kit, stochastic)
https://www.sciencenews.org/article/12-reasons-research-goes-wrong (adapted)
17. • replication hostility
• resource intensive
• no funding, time,
recognition, place to
publish
• the complete
environment?
It's HARD to Prepare and Independently Test
[Norman Morrison]
20. WHEN?
• Can I repeat my method? DEFEND: same experiment, set up, lab (publish article, submit article, and move on…)
• Can I replicate your method? CERTIFY: same experiment, set up, independent lab (a window before decay sets in…)
• Can I reproduce my results using your method, or your results using my method? COMPARE: variations on experiment, set up, lab
• Can I reuse your results / method in my research? TRANSFER: different experiment
*Adapted from Mesirov, J. Accessible Reproducible Research, Science 327(5964), 415-416 (2010)
21. WHO? scientific ego-system & access
trust, reciprocity, and competition
blame
scooping
no credit / credit drift
misinterpretation
scrutiny trolling
cost of preparation
support distraction
dependents on old news
loss of dowry
loss of special sauce
hugging
flirting
voyeurism
cautionary creeping
Tenopir, et al. Data Sharing by Scientists: Practices and Perceptions. PLoS ONE 6(6) 2012
Borgman The conundrum of sharing research data, JASIST 2012
22. John P. A. Ioannidis, How to Make More Published Research True, October 21, 2014, DOI: 10.1371/journal.pmed.1001747
Sandve GK, Nekrutenko A, Taylor J, Hovig E (2013) Ten Simple Rules for Reproducible Computational Research. PLoS Comput Biol 9(10): e1003285. doi:10.1371/journal.pcbi.1003285
HOW?
27. Summary
• Replicable Science is
hard work and poorly
rewarded
• Reproducible Science
=> Transparent Science
but ideally needs to be
born that way
• Collective responsibility
28. • Barend Mons
• Sean Bechhofer
• Philip Bourne
• Matthew Gamble
• Raul Palma
• Jun Zhao
• Alan Williams
• Stian Soiland-Reyes
• Paul Groth
• Tim Clark
• Juliana Freire
• Alejandra Gonzalez-Beltran
• Philippe Rocca-Serra
• Ian Cottam
• Susanna Sansone
• Kristian Garza
• Hylke Koers
• Norman Morrison
• Ian Fore
• Jill Mesirov
• Robert Stevens
• Steve Pettifer
http://www.researchobject.org
http://www.wf4ever-project.org
http://www.fair-dom.org
http://www.software.ac.uk