This document discusses the Content Mine project, which aims to extract factual information from scientific literature using automated processes. Some key points:
1) ContentMine will extract 100 million facts per year from scientific papers by crawling, scraping, extracting, and republishing the data. The extracted data will be made openly available under open licenses and standards.
2) The goal is to make the vast amount of data locked in scientific papers more accessible and useful by converting it to structured, semantic formats like CSV and applying techniques like computer vision and natural language processing.
3) This will help address issues like an estimated 85% of medical research being wasted due to problems like poor data sharing and availability, by extracting facts at scale.
Jean-Claude Bradley was a pioneer of Open Science, and on 2014-07-14 we held a memorial meeting in Cambridge (see also http://inmemoriamjcb.wikispaces.com/Jean-Claude+Bradley+Memorial+Symposium)
This document discusses disruptive communities and technologies in science. It advocates for open access to scientific literature and data to accelerate research. Peter Murray-Rust notes that closed access to publications and data can literally cause harm, as important information is kept from scientists and the public. He promotes the ideas of open access pioneers like Jean-Claude Bradley, who developed open notebook science to openly share the records of research projects. Murray-Rust argues that restricting access to scientific works and data through licensing destroys the ability to freely mine and analyze content, and believes the right to read scientific works should enable open mining of information.
Can Computers understand the scientific literature (includes compscie material) petermurrayrust
This document discusses enabling machines to understand scientific literature through semantic markup and content mining. It describes how semantic tagging, natural language processing, and computer vision can extract structured information like chemical structures, reactions, and spectral data from papers. Content mining at large scale could liberate supplemental data and integrate it with open resources, enabling new applications of scientific data. The document advocates for fully open licensing of research data and outputs to enable such automated understanding and reuse by both humans and machines.
Automatic Extraction of Knowledge from the Literature TheContentMine
Published on May 11, 2016 by PMR
ContentMine tools (and the Harvest alliance) can be used to search the literature for knowledge, especially in biomedicine. All tools are Open and shortly we shall be indexing the complete daily scholarly literature
This document summarizes Peter Murray-Rust's speech advocating for open access, open data, and open educational resources. It discusses the history of movements fighting for openness, such as the Free Speech Movement and opposition to SOPA. It highlights individuals and organizations working to make scholarly communications more open and equitable. The speech argues that closed access to research and data hinders progress, and calls the audience to continue efforts to reform publishing practices and build an open scholarly ecosystem through community organizing, software development, and training.
Talk to the EBI Industry group on Open Software for chemical and pharmaceutical sciences. Covers examples from chemistry, with demos, and argues that all public knowledge should be Openly accessible
Published on Jan 29, 2016 by PMR
Keynote talk to LEARN (LERU/H2020 project) on research data management. Emphasizes that the problems are cultural, not technical. Promotes modern approaches such as Git / continuous integration, announces DAT. Asserts that the Right to Read is the Right to Mine. Calls for widespread development of content mining (TDM)
This document summarizes Peter Murray-Rust's presentation on open content and programs at MIOSS 2016. It discusses how open approaches can lead to faster, better, and more inclusive research. It provides examples of open source tools for tasks like chemical entity recognition, structure identification, and chemical language parsing. It also describes efforts to openly mine scientific literature to extract facts and analyze large amounts of data.
Liberating facts from the scientific literature - Jisc Digifest 2016 TheContentMine
Published on Mar 4, 2016 by PMR
Text and data mining (TDM) techniques can be applied to a wide range of materials, from published research papers, books and theses, to cultural heritage materials, digitised collections, administrative and management reports and documentation, etc. Use cases include academic research, resource discovery and business intelligence.
This workshop will show the value and benefits of TDM techniques and demonstrate how ContentMine aims to liberate 100,000,000 facts from the scientific literature, and ContentMine will provide a hands on demo on a topical and accessible scientific/medical subject.
Specimen-level mining: bringing knowledge back 'home' to the Natural History ... Ross Mounce
A talk given at the Geological Society of London, UK on 2016/03/09 as part of the Lyell meeting on Palaeoinformatics. http://www.geolsoc.org.uk/lyell16 #lyell16
The document discusses the opportunities that open science provides and the challenges of implementing open science practices. It argues that open science allows data and research to have a longer lifespan by enabling others to build upon it. While making data and methods openly available requires extra effort, databases and tools are helping to make open science practices more practical. The document urges researchers to look beyond just preventing bad practices and to focus on the potential benefits of open science, such as maximizing the usefulness of research and stimulating new discoveries.
Automatic Extraction of Knowledge from Biomedical literature TheContentMine
Published on Mar 16, 2016 by PMR
A plenary lecture to Cochrane Collaboration in Birmingham, on the value of automatically extracting knowledge. Covers the Why? How? What? Who? and problems and invites collaboration
High throughput mining of the scholarly literature TheContentMine
Published on Jun 7, 2016 by PMR
Talk given to statisticians in Tilburg, with emphasis on scholarly comms for detecting unusual features. Includes demo of Amanuens.is and image mining
Amanuens.is: Humans and machines annotating scholarly literature TheContentMine
Published on May 19, 2016 by PMR
About 10,000 scholarly articles ("papers") are published each day. Amanuens.is is a symbiont of ContentMine and Hypothes.is (both Shuttleworth projects/Fellows) which annotates theses using an array of controlled vocabularies ("dictionaries"). The results, in semantic form, are used to annotate the original material. The talk had live demos and used plant chemistry as the examples
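The dictionary-based annotation described above can be sketched in a few lines of Python. This is an illustrative toy under invented assumptions (the terms, identifiers and sentence are made-up examples), not the Amanuens.is implementation.

```python
import re

# A toy controlled vocabulary ("dictionary"): term -> identifier.
# These entries are invented examples, not a real ContentMine dictionary.
plant_chemistry = {
    "limonene": "CHEBI:15384",
    "pinene": "CHEBI:17187",
    "menthol": "CHEBI:76310",
}

def annotate(text, dictionary):
    """Return (term, identifier, offset) for each dictionary term found in text."""
    hits = []
    for term, ident in dictionary.items():
        # Whole-word, case-insensitive matching against the running text.
        for m in re.finditer(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
            hits.append((term, ident, m.start()))
    return sorted(hits, key=lambda h: h[2])

sentence = "The oil contained limonene and traces of menthol."
for term, ident, off in annotate(sentence, plant_chemistry):
    print(f"{off:3d}  {term}  ->  {ident}")
```

A real annotator would also handle synonyms, multi-word terms and overlapping matches, but the core idea, a lexical lookup that attaches stable identifiers to text spans, is the same.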
Published on Aug 22, 2014 by PMR
Open Data and Open Science presented in Rio for Open Science 2014-08-22. I argue that Open Notebook Science is the way forward and will lead to great benefits
Digital Scholarship: Enlightenment or Devastated Landscape? TheContentMine
Published on Dec 17, 2015 by PMR
Every year 500 Billion USD of public funding is spent on research, but much of this lies hidden in papers that are never read. I describe how machines can help us to read the literature. However there is massive opposition from publishers who are trying to prevent open scholarship and who build walled gardens that they control
Jean-Claude Bradley's vision of open notebook science aims to change how science is done by making all research processes and outputs openly accessible in real-time. This addresses significant waste in the current system where most research funds are wasted due to flawed design, non-publication and poor reporting of results. Open science is inspired by open source software which engineers repositories for transparent and collaborative development, validation and improvement by global communities. The combination of openness, truth and community can optimize scientific processes and outputs.
Published on Jul 24, 2014 by PMR
PhD Theses are normally locked away digitally. They cost 20 billion dollars to create and we waste much of this value. By making them open we can use software to read, index, reuse, compute and add massive value
This document summarizes Peter Murray-Rust's work on mining scientific images and extracting structured information from them. It discusses his software tools for image analysis, including recognizing figures, tables, chemical structures, phylogenetic trees and extracting data. It provides examples of extracting information from images and transforming unstructured data in PDFs into semantic, computable formats. The document also discusses opportunities for collaboration and making the software openly available.
Published on Feb 29, 2016 by PMR
An overview of Text and Data Mining (ContentMining) including live demonstrations. The fundamentals (discover, scrape, normalize, facet/index, analyze, publish) are exemplified using the recent Zika outbreak. Mining covers textual and non-textual content, and examples from chemistry and phylogenetic trees are given.
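The discover/scrape/normalize/facet-index steps above can be sketched as a minimal pipeline. This is a toy with an invented three-document corpus, not the ContentMine software; it only illustrates the shape of the workflow.

```python
from collections import defaultdict

# Invented example corpus standing in for scraped papers.
papers = {
    "paper1": "Zika virus was detected in Brazil.",
    "paper2": "The Zika outbreak spread rapidly.",
    "paper3": "Malaria remains endemic in many regions.",
}

def normalize(text):
    """Lower-case and tokenize: a crude stand-in for real normalization."""
    return [w.strip(".,;") for w in text.lower().split()]

def build_index(corpus):
    """Facet/index step: map each word to the set of papers containing it."""
    index = defaultdict(set)
    for doc_id, text in corpus.items():
        for word in normalize(text):
            index[word].add(doc_id)
    return index

index = build_index(papers)
# Analyze step: which papers mention "zika"?
print(sorted(index["zika"]))
```

In the real pipeline each stage is far richer (crawling publisher sites, converting PDF/XML, tagging entities with dictionaries), but the data flow, raw documents in, a queryable semantic index out, follows this pattern.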
Can Computers understand the scientific literature (includes compscie material) TheContentMine
Published on Jan 24, 2014 by PMR
With the semantic web machines can autonomously carry out many knowledge-based tasks as well as humans. The main problems are not technical but the prevention of access to information. I advocate automatic downloading and indexing of all scientific information
Published on Feb 07, 2016 by PMR
Use of ContentMine tools on the Open Access subset of EuropePubMedCentral to discover new knowledge about the Zika virus. Includes clips of the software in action
ContentMine: Open Data and Social Machines TheContentMine
Published on Nov 13, 2014 by PMR
Scientific information is often hidden or not published properly. The ContentMine is a Social Machine consisting of semantic software and communities of domain expertise; it aims to liberate all scientific facts from the published literature on a daily basis.
The talk, delivered to the Computational Institute, was followed by a hands-on workshop on learning to use the technology and work as a community.
Automatic Extraction of Science and Medicine from the scholarly literature TheContentMine
Published on Jun 04, 2015 by PMR
Many scientists need to extract large numbers of facts from the scholarly literature, whether to evaluate other work or to build useful collections of facts. This shows the approach, especially for systematic reviews of animal or clinical trials
Published on May 18, 2015 by PMR
Basics of ContentMining presented to Synthetic Biologists. This was followed by a lively discussion of what components could be extracted from the literature
This document summarizes a presentation on using open-source tools to provide access to scientific literature on climate change and migration. It describes how ContentMine has built tools called "Open Climate Knowledge" to mine scientific articles on climate change from publishers' websites and other open sources. However, most of this literature (50-90%) is currently behind paywalls. The tools allow querying across open-access sources to provide summaries of available literature on topics like the relationship between climate change and human migration. Examples of results from initial queries on this topic are also provided.
This document summarizes a workshop on content mining clinical trials literature. The workshop included:
- Discussions on why content mining clinical trials could be useful and current tools/needs.
- A demonstration of software for hands-on content mining and extracting specific trial information.
- Technical sessions where participants worked in groups to extract sections of clinical trial reports using content mining software and techniques.
This document provides a summary of a presentation on open scientific knowledge and building a knowledgebase beyond traditional journals. The presentation discusses the problems with publishers controlling infrastructure and restricting access to knowledge. It demonstrates software tools like getpapers and AMI that can be used to freely access and search across scientific literature. The presentation advocates for open access to all scientific literature and building a sustainable community and organization to achieve this goal.
Paradise Lost and The Right to Read is the Right to Mine petermurrayrust
Presented to UIUC CIRSS seminars to a mixed group of Library, CS, domain scientists with a great contingent of Early Career Researchers. Starts by honouring the creation of the wonderful NCSA Mosaic at UIUC in 1993 and the paradise of knowledge and community it opened. Then shows the gradual and tragic decline of the web into a megacorporate neocolonialist empire, where knowledge is sacrificed for money and power.
You have seen many of the slides before but the words are different and have been recorded.
The scientific and medical literature is a vast resource of knowledge, but it needs turning into semantic FAIR form. The ContentMine can do this and we presented a rapid overview of the potential
Automatic mining of data from materials science literature petermurrayrust
The literature on materials science (batteries, etc.) contains huge amounts of scientific facts, but not in easily accessible form. Our AMI program has been developed to automatically scrape, clean, annotate and display/publish data for re-use in science. Examples will be given from electrochemistry, magnetism and other fields. The general principles and (open) tech are applicable to many other disciplines.
Open Knowledge and University of Cambridge European Bioinformatics Institute TheContentMine
This document discusses open data and open science. It highlights Jean-Claude Bradley as a pioneer of open notebook science and open data who believed closed data means people die. It describes tools like ContentMine that can automatically extract data like chemical reactions, phylogenetic trees and clinical trial results from papers. Visitors can extract specific types of data while repositories can solve problems communally with continuous publication and validation.
The Culture of Research Data, by Peter Murray-Rust LEARN Project
1st LEARN Workshop. Embedding Research Data as part of the research cycle. 29 Jan 2016. Presentation by Peter Murray-Rust, ContentMine.org and University of Cambridge
Published on Nov 26, 2014 by PMR
Follow-up meeting in London to OpenCon2014, on the need for different models of scholarly communication. I explore the history of 20th-century academic student-based revolutions, with special relevance to young people and the scope for action today.
Published on Mar 05, 2015 by PMR
contentmine.org (funded by Shuttleworth Foundation) has developed tools and workshops to allow anyone to mine scientific content. This 10-minute presentation at Wellcome Trust encourages you to become involved - no previous knowledge required.
Published on Dec 01, 2014 by PMR
An overview of ContentMining for JISC (the infrastructure provider of UK academia). Examples and details leading to a hands-on exercise (http://contentmine.org/workflow)
Published on Mar 19, 2015 by PMR
Copyright is one of the greatest barriers to Open Data. This presentation for insidegovernment UK shows the struggle between those who want to reform copyright and those opposed to reform
ContentMine: Liberating scholarship from Open publications and theses TheContentMine
Published on Apr 21, 2015 by PMR
Theses represent a huge amount of untapped value. We show how contentmine.org technology can be used to mine them and extract knowledge
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
PPT on Alternate Wetting and Drying presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
The cost of acquiring information by natural selectionCarl Bergstrom
This is a short talk that I gave at the Banff International Research Station workshop on Modeling and Theory in Population Biology. The idea is to try to understand how the burden of natural selection relates to the amount of information that selection puts into the genome.
It's based on the first part of this research paper:
The cost of information acquisition by natural selection
Ryan Seamus McGee, Olivia Kosterlitz, Artem Kaznatcheev, Benjamin Kerr, Carl T. Bergstrom
bioRxiv 2022.07.02.498577; doi: https://doi.org/10.1101/2022.07.02.498577
2. ContentMine: We use machines to liberate 100 million facts/yr from the scientific literature and make them free for everyone (WikiData)
With Wikipedia we are ALL scientists
ContentMine is a social machine
WikiData is the future of science data
3. http://en.wikipedia.org/wiki/Tim_Berners-Lee
Everything in this presentation is ODOSOS
(Open Data, Open Standards, Open Source)
CC0, CC-BY, W3C etc., Apache2, etc. *
http://contentmine.org
http://bitbucket.org/petermr
http://wwmm.ch.cam.ac.uk
*Sorry about the PowerPoint (Tufte: "Power corrupts; PowerPoint corrupts absolutely")
A promise: I (Petermr) will never sell out to non-transparent organizations.
4. petermr: I believe in Wikipedia
• 2006: http://en.wikipedia.org/wiki/User:Petermr
• 2006: started Open Data (the term was unknown then!)
• 2009: "the bit of Wikipedia that I wrote is correct" [challenging the idea that "WP is junk"]
• 2009: "Wikipedia is the digital library of this century"
• 2012: I alerted WP that Springer had copyrighted > 1000 of our images [Springergate]
• 2014: "For facts in maths, physical and biological sciences I trust Wikipedia." (Wikimania 2014)
7. Scientific and Medical publication (STM) [+]
• World citizens pay $400,000,000,000 …
• … for research reported in 1,500,000 articles …
• … which cost $300,000 each to create …
• … and $7000 each to "publish" [*] …
• … $10,000,000,000 from academic libraries …
• … to "publishers" who forbid access to 99.9% of the citizens of the world …
[+] Figures probably ±50%
[*] the arXiv preprint server costs $7 USD per paper
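A quick consistency check of the slide's rounded figures; the variable names are ours, not from the slide:

```python
# Sanity check: the per-article figures follow from the totals (all ±50%).
articles = 1_500_000                 # articles published per year
library_spend = 10_000_000_000       # USD/yr paid by academic libraries
research_spend = 400_000_000_000     # USD/yr world research spend

per_article_publish = library_spend / articles   # the "$7000 to publish"
per_article_create = research_spend / articles   # the "$300,000 to create"

print(round(per_article_publish))  # → 6667, i.e. roughly $7000
print(round(per_article_create))   # → 266667, i.e. roughly $300,000
```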
8. 4 billion USD spent on the human genome yielded 800 billion USD and 4 M job-years
10. "… three problems — flawed design, non-publication, and poor reporting — together meant >85% of research funds were wasted, a global total loss of >100 billion USD per year." [Lancet 2009]
"[Even more] waste clearly occurs after publication: from poor access, poor dissemination, and poor uptake of the findings of research." [PLOS Medicine 2014-05-27]
Bad publication wastes science
11. Publishers' PDFs destroy science
PDFs do not contain words or subscripts!
PDFs do not contain tables and do not have columns
SVG is turned into JPEG because it's easier to process
13. STM Publishers' Licence
2012_03_15_Sample_Licence_Text_Data_Mining.pdf
(Summary: PMR has NO rights)
• [cannot publish to:] "libraries, repositories, or archives"
• [cannot] "Make the results of any TDM Output available on an externally facing server or website"
• "Subscriber shall pay a […] fee"
Heather Piwowar: "negotiating with publishers [made me physically ill]"
WE WALKED OUT
• British Library
• JISC
• RLUK
• OKFN
• …
• Ross Mounce
• PM-R
Licences destroy Content Mining
16. http://www.budapestopenaccessinitiative.org/read
"… an unprecedented public good. … completely free and unrestricted access to [the peer-reviewed literature] by all scientists, scholars, teachers, students, and other curious minds. … Removing access barriers to this literature will accelerate research, enrich education, share the learning of the rich with the poor and the poor with the rich, make this literature as useful as it can be, and lay the foundation for uniting humanity in a common intellectual conversation and quest for knowledge."
(Budapest Open Access Initiative, 2002)
17. The Right to Read is the Right to Mine
http://contentmine.org
18. • Science can be read and understood by human-machine Amanuensis-symbionts.
• Amanuenses are based on Wikipedia, databases and software (e.g. ContentMine's AMI)
• The results are fed back into WP and WikiData
http://en.wikipedia.org/wiki/Symbiosis
http://en.wikipedia.org/wiki/Eric_Fenby
19. Machine Extraction of scientific facts
• Crawl the scientific literature (Open Bibliography)
• Scrape each scientific article (ContentMine-quickscrape)
• Extract the facts (ContentMine-AMI)
• Index (Wikipedia)
• Republish (WikiData)
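The five-stage pipeline above can be sketched as toy code. Everything here — the in-memory "literature", the regex fact extractor, and all function names — is illustrative and merely stands in for the real Open Bibliography, quickscrape, and AMI tools:

```python
# Toy sketch of crawl -> scrape -> extract -> index/republish.
# The sample article and regex are stand-ins, not ContentMine code.
import re

LITERATURE = {
    "doi:10.1/abc": "<html><p>Clostridium butyricum was isolated here.</p></html>",
}

def crawl():
    """Stage 1: enumerate article identifiers (Open Bibliography)."""
    return list(LITERATURE)

def scrape(article_id):
    """Stage 2: fetch the raw article (ContentMine-quickscrape)."""
    return LITERATURE[article_id]

def extract(html):
    """Stage 3: pull candidate facts - here, binomial species names (AMI)."""
    text = re.sub(r"<[^>]+>", " ", html)          # strip markup
    return re.findall(r"\b[A-Z][a-z]+ [a-z]+\b", text)

def republish(article_id, fact_list):
    """Stages 4-5: index and republish as open statements (WikiData)."""
    return [{"source": article_id, "fact": f} for f in fact_list]

facts = []
for aid in crawl():
    facts += republish(aid, extract(scrape(aid)))
print(facts)  # → [{'source': 'doi:10.1/abc', 'fact': 'Clostridium butyricum'}]
```

The point of the sketch is the shape of the pipeline: each stage has a narrow, testable contract, so domain plugins can replace the extractor without touching crawling or republishing.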
25. Open Content Mining of FACTs
Machines can interpret chemical reactions
We have done 500,000 patents. There are > 3,000,000 reactions/year. Added value > 1 B EUR.
26. [Architecture diagram: Scientific literature, Queues, Repos, Science Plugins, Science, Volunteers]
RSU: Richard Smith-Unna
PMR: Peter Murray-Rust
CL: CottageLabs
27. But we can now turn PDFs into Science
We can't turn a hamburger into a cow
30. Bacterial phylogenetic trees (WP)
Our machines have read and interpreted 4300 trees in an hour with > 95% accuracy
Trees from http://ijs.sgmjournals.org/ used under new UK legislation (Hargreaves)
WP: Clostridium_butyricum
GenBank ID
American Type Culture Collection
31. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0036933 – "Adaptive Evolution of HIV at HLA Epitopes Is Associated with Ethnicity in Canada"
((n122,((n121,n205),((n39,(n84,((((n35,n98),n191),n22),n17))),((n10,n182),((((n232,n76),n68),(n109,n30)),(n73,(n106,n58))))))),((((((n103,n86),(n218,(n215,n157))),((n164,n143),((n190,((n108,n177),(n192,n220))),((n233,n187),n41)))),((((n59,n184),((n134,n200),(n137,(n212,((n92,n209),n29))))),(n88,(n102,n161))),((((n70,n140),(n18,n188)),(n49,((n123,n132),(n219,n198)))),(((n37,(n65,n46)),(n135,(n11,(n113,n142)))),(n210,((n69,(n216,n36)),(n231,n160))))))),(((n107,n43),((n149,n199),n74)),(((n101,(n19,n54)),n96),(n7,((n139,n5),((n170,(n25,n75)),(n146,(n154,(n194,(((n14,n116),n112),(n126,n222))))))))))),(((((n165,(n168,n128)),n129),((n114,n181),(n48,n118))),((n158,(n91,(n33,n213))),(n87,n235))),((n197,(n175,n117)),(n196,((n171,(n163,n227)),((n53,n131),n159)))))));
http://en.wikipedia.org/wiki/Digital_image_processing
http://en.wikipedia.org/wiki/Newick_format http://en.wikipedia.org/wiki/Phylogenetics
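The tree extracted above is a Newick string. A minimal sketch of how such a string can be parsed into a nested structure — the function and the tuple representation are illustrative, not AMI's actual code:

```python
# Minimal recursive-descent parser for Newick trees without branch lengths.
def parse_newick(s):
    """Parse a Newick string like '((a,b),c);' into nested tuples."""
    pos = 0

    def node():
        nonlocal pos
        if s[pos] == '(':
            pos += 1                      # consume '('
            children = [node()]
            while s[pos] == ',':
                pos += 1                  # consume ','
                children.append(node())
            pos += 1                      # consume ')'
            return tuple(children)
        start = pos                       # leaf: read name up to a delimiter
        while pos < len(s) and s[pos] not in '(),;':
            pos += 1
        return s[start:pos]

    return node()

print(parse_newick("((n122,(n121,n205)),n39);"))
# → (('n122', ('n121', 'n205')), 'n39')
```

Once a tree is in this structured form it can be compared, indexed, or re-serialized — which is what makes the extracted fact computable rather than a picture.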
32. Open notebook science is the practice of making the entire primary record of a research project publicly available online as it is recorded. (WP)
Jean-Claude Bradley was a chemist who actively promoted Open Science in chemistry … He coined the term Open Notebook Science. … A memorial symposium was held July 14, 2014 at Cambridge University, UK. [9]
33. [Architecture diagram, as on slide 26: Scientific literature, Queues, Repos, Science Plugins, Science, Volunteers]
RSU: Richard Smith-Unna
PMR: Peter Murray-Rust
CL: CottageLabs
34. My Wikiwishes
• An Open Bibliography of science, updated daily
• An interface for ContentMine to feed new facts into WikiData
• Domain-specific enthusiasts to create and run fact extraction and validation
• Wikipedia to become a C21 publisher of science
35. Thanks
• Shuttleworth Foundation and Fellowship
• Contentmine.org: Michelle Brook, Jenny Molloy, Ross Mounce, Richard Smith-Unna, CottageLabs, Charles Oppenheim
• Open Knowledge Foundation Community
• Wikimedia Community
• Blue Obelisk Community