This document discusses the evolution of research from versions 1.0 to 3.0. Research 1.0 involved universities doing research and communicating findings through publications. Research 2.0 saw more funding from governments and industry, greater specialization, and unintended consequences like rewards prioritizing publications over reproducibility. Research 3.0, still emerging, may see all research objects and workflows captured digitally and shared openly online, allowing greater reuse and new applications of data. It questions who will do and fund research in the future, and how libraries may adapt to remain important in scholarly communications.
Opening remarks and framing for "Research 3.0: Accelerating Discovery in a Digital Universe" at the 2012 joint meeting of the AAMC GREAT and GRAND groups.
See also the plenary session talks by Anita de Waard, John Wilbanks and Cameron Neylon. All on slideshare.net.
University of Miami's Clinical and Translational Sciences Institute (CTSI) runs a bootcamp course for medical residents & fellows and postdocs. This is my 2012 version of "Questions for knowledge creators" lecture.
Intro to Big Data session, AAMC GREAT/GRAND Meeting, 2014, by Richard Bookman
Introductory remarks for the Big Data in Biomedical Research and Training session at the annual meeting of the AAMC GREAT & GRAND groups. Fort Worth, Texas, 9/19/2014
Univ of Miami CTSI: Citizen science seminar; Oct 2014, by Richard Bookman
The University of Miami's Clinical & Translational Science Institute runs a seminar course for MS students.
This talk surveys 8 citizen science projects, reviews NIH's current activities, and identifies issues for attention, particularly with ethical, legal and social implications.
Mendeley: Recommendation Systems for Academic Literature, by Kris Jack
I gave this talk to an MSc class about Semantic Technologies at the Technical University of Graz (TUG) on 2012/01/12.
It presents what recommendation systems are and how they are often used before delving into how they are used at Mendeley. Real-world results from Mendeley’s article recommendation system are also presented.
The work presented here has been partially funded by the European Commission as part of the TEAM IAPP project (grant no. 251514) within the FP7 People Programme (Marie Curie).
Scott Edmunds @ Balti & Bioinformatics: New Models in Open Data Publishing. January 21st 2015. Video archive https://plus.google.com/u/0/events/cbtuikle0h2619obgjrgfu74424
The Internet, Science, and Transformations of Knowledge, by Eric Meyer
Talk on June 7, 2012 in the Harvard SAP Speaker Series (Office of the Senior Associate Provost for the Harvard Library).
http://www.provost.harvard.edu/harvard_library/sap_speakers_series.php
Presentation given to Pubmet 2015, Zadar, Croatia.
For the live presentation with rich media content, please see: http://kosson.ro/webpedia/presentationsnicolaiec/Croatia2015/#/
Scott Edmunds talk at AIST: Overcoming the Reproducibility Crisis: and why I ... (GigaScience, BGI Hong Kong)
Scott Edmunds talk at the AIST Computational Biology Research Center in Tokyo: Overcoming the Reproducibility Crisis: and why I stopped worrying and learned to love open data (& methods), July 1st 2014
Metadata and Semantics Research Conference, Manchester, UK 2015
Research Objects: why, what and how,
In practice the exchange, reuse and reproduction of scientific experiments is hard, dependent on bundling and exchanging the experimental methods, computational codes, data, algorithms, workflows and so on along with the narrative. These "Research Objects" are not fixed, just as research is not "finished": codes fork, data is updated, algorithms are revised, workflows break, service updates are released. Nor should they be viewed as second-class artifacts tethered to publications; they are the focus of research outcomes in their own right: articles clustered around datasets, methods with citation profiles. Many funders and publishers have come to acknowledge this, moving to data-sharing policies and provisioning e-infrastructure platforms. Many researchers recognise the importance of working with Research Objects, and the term has become widespread. However: what is a Research Object? How do you mint one, exchange one, build a platform to support one, curate one? How do we introduce them in a lightweight way that platform developers can migrate to? What is the practical impact of a Research Object Commons on training, stewardship, scholarship, sharing? How do we address the scholarly and technological debt of making and maintaining Research Objects? Are there any examples?
I’ll present our practical experiences of the why, what and how of Research Objects.
Supervised Multi Attribute Gene Manipulation For Cancer, by paperpublications3
Abstract: Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help companies focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviours, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by retrospective tools typical of decision support systems.
They scour databases for hidden patterns, finding predictive information that experts may miss because it lies outside their expectations. Data mining techniques are the result of a long process of research and product development. This evolution began when business data was first stored on computers, continued with improvements in data access, and more recently, generated technologies that allow users to navigate through their data in real time. Data mining takes this evolutionary process beyond retrospective data access and navigation to prospective and proactive information delivery.
myExperiment and the Rise of Social Machines, by David De Roure
Talk at hubbub 2012, Indianapolis, 25 September 2012. The talk introduces myExperiment and Wf4Ever, discusses the future of research communication including FORCE11, and introduces the SOCIAM project (Theory and Practice of Social Machines) which launches in October 2012.
An internal presentation to the SRI AI Center, to get people up to speed on current goings-on in open science. Tries to cover far too many things, and slides probably aren't very comprehensible by themselves.
Towards Responsible Content Mining: A Cambridge perspective, by petermurrayrust
ContentMining (Text and Data Mining) is now legal in the UK for non-commercial research. Cambridge UK is a natural centre, with several components:
* a world-class University and Library
* many publishers, both Open Access and conventional
* a digital culture
* ContentMine - a leading proponent and practitioner of mining
Cambridge University Press welcomes content mining and invited PMR to give a talk there. He showed the technology and protocols and proposed a practical way forward in 2017
Open Research Practices in the Age of a Papermill Pandemic, by Dorothy Bishop
Talk given to Open Research Group, Maynooth University, October 2022.
Describes the phenomenon of large-scale fraudulent science publishing (papermills), and discusses how open science practices can help tackle this.
This is an overview of the Data Biosphere Project, its goals, its architecture, and the three core projects that form its foundation. We also discuss data commons.
10. Over the past decade, before pursuing a particular line of research, scientists (including C.G.B.) in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed 'landmark' studies. It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics. Nevertheless, scientific findings were confirmed in only 6 (11%) cases. Even knowing the limitations of preclinical research, this was a shocking result.
49. Producing research objects

1. Research: Each object in the system has metadata (including provenance) and relations to other data objects added to it.
2. Workflow: All data objects used by or created in the lab are captured within a (lab-owned) workflow system.
3. Authoring: A communication is written in an authoring tool which can pull data objects, with provenance, from the workflow tool into the document in the appropriate representation.
4. Editing and review: Once the co-authors agree, the paper is 'exposed' to editors, who in turn expose it to reviewers. Reports are stored in the authoring/editing system and the paper gets updated until it is validated.
5. Publishing and distribution: When a document is published, a collection of validated research objects is exposed to the world. The document remains connected to its related data objects, and their provenance can be traced.
6. (Re-)User applications: Distributed applications run (autonomously?) on the set of exposed data objects.

[Slide mock-up text: "Rats were subjected to two tests (click on fig 2 to see underlying data). These results suggest that the neurological pain produced by …." Diagram labels: Some other publisher; Review; Revise; Edit.]

Concept modified from one developed by Anita de Waard, Ed Hovy, Phil Bourne, Gully Burns and Cartic Ramakrishnan
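The six-step pipeline above can be sketched as a tiny data model. This is a minimal, hypothetical Python sketch: the class, field names, and identifiers are illustrative assumptions of mine, not any real research-object specification (such as the Wf4Ever RO model). It shows the one property the slide insists on: a published figure stays linked to its data, and provenance can be traced all the way back.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ResearchObject:
    """One object in the pipeline: an identifier, descriptive
    metadata, a provenance trail, and typed links to related objects."""
    identifier: str
    metadata: dict = field(default_factory=dict)
    provenance: List[str] = field(default_factory=list)
    relations: List[Tuple[str, str]] = field(default_factory=list)

    def derive(self, new_id: str, step: str) -> "ResearchObject":
        """Create a downstream object (steps 2-3: workflow capture,
        authoring) that records where it came from."""
        return ResearchObject(
            identifier=new_id,
            metadata=dict(self.metadata),
            provenance=self.provenance + [f"{step} <- {self.identifier}"],
            relations=[("derivedFrom", self.identifier)],
        )

# Steps 1-3 in miniature: raw data -> workflow output -> figure in a paper
raw = ResearchObject("dataset:rnaseq-001", {"creator": "lab"})
result = raw.derive("result:fig2-data", "workflow-run")
figure = result.derive("figure:fig2", "authoring-tool")

# Step 5: the published figure remains connected to its data
print(figure.relations)   # direct link back to result:fig2-data
print(figure.provenance)  # full trail back to dataset:rnaseq-001
```

A real system would of course use persistent identifiers and a standard provenance vocabulary; the point here is only that "remains connected, provenance can be traced" is a small, concrete data-structure requirement.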
Image is a Gerhard Richter painting: http://www.gerhardrichterpainting.com/wp-content/uploads/2012/01/gerhardrichter_photo3.jpg
What follows are the slides from a talk to the UM library community and other interested folks. The slides are NOT a stand-alone representation of the substance of the talk. Mostly, they illustrate some aspect of a point made in my talk.
http://www.gerhardrichterpainting.com/wp-content/uploads/2012/01/gerhardrichter_photo7.jpg
Broad brush strokes… mostly focus on science – natural or social… not so much humanities.
Nature 483, 531-533 (29 March 2012) doi:10.1038/483531a
Interesting comment in May 2012… can't disclose which papers because of confidentiality agreements between Amgen and authors of the original papers.
Photograph taken of an archival print of Zeller ink and acrylic drawing. This Zeller work is part of scimaps.org: http://scimaps.org/maps/map/hypothetical_model_o_51/
Research ecosystem: the entire system by which humans create new knowledge, encompassing language, technology, governance, dissemination, etc. Many questions we need to explore to describe the evolution of how humans do research, but today's talk will emphasize 4.
Newton: 1643-1727 – overlaps w/ start of Phil. Trans. Charles Darwin at approx. the age of the Beagle voyage.
The first issue, 6 March 1665, was edited and published by the society's first secretary, Henry Oldenburg, only six years after the Royal Society was founded. He published the journal at his own expense and had an agreement with the Royal Society that he kept any profits. He was to be disappointed, however, as the journal performed poorly during Oldenburg's lifetime. Source: redOrbit (http://s.tt/1aL0Z)
Darwin's paper w/ Wallace: http://www.age-of-the-sage.org/philosophy/linnean_society_darwin_wallace.bmp
Initial sequencing and analysis of the human genome. Nature 409, 860-921 (15 February 2001) | doi:10.1038/35057062; Received 7 December 2000; Accepted 9 January 2001. http://www.nature.com/nature/journal/v409/n6822/full/409860a0.html
ibid.
key change is institutionalization or corporatization of research from R1 to R2.
http://blogs.jwatch.org/hiv-id-observations/wp-content/uploads/2011/05/unintended-consequences.jpg
Now the institutions have an interest that is not always aligned with either the scientist or society… let alone the science.
and scientists’ behavior can be shaped…
http://3.bp.blogspot.com/_Uf8aRwoaSJ8/TS0PqvkTw9I/AAAAAAAAANE/c372nzbb-7k/s1600/Church+of+Covenant+-+Tower+Collapse+at+construction+time+%25282%2529.jpg
One of Washington's greatest losses in historic religious structures was the old National Presbyterian Church, originally called the Church of the Covenant, which used to rise from the southeast corner of Connecticut Avenue and N Street NW. The building, which James M. Goode has called a "dignified masterpiece in gray granite," was completed in 1889 and torn down in 1966, to be replaced by a nondescript office building. Construction of the main church began in 1887 and was nearly complete when the 158-foot Ohio-sandstone tower suddenly collapsed into a heap of rubble early on the morning of August 22, 1888. What caused the collapse? Fingers were pointed in all directions. "It was the fault of the contractor; it was the fault of the architect; it was the fault of the trustees, of the material, of the mortar, of everything and of nothing," the Post reported with exasperation. An official investigation soon concluded that the basic design was sound but that inferior materials and workmanship were to blame for the accident. The mortar, in particular, was found to be "practically worthless." The architect, contractors, and Church congregation agreed to divide the cost of reconstruction equally, and a new and very solid tower was soon standing.
George Dyson has laid out the ideas and the history of the big bang of the digital universe in a number of places, including:
* Darwin Among the Machines (1997)
* Turing's Cathedral (2012)
* this Edge Conversation: http://edge.org/conversation/a-universe-of-self-replicating-code/
…and a nice video of his talk at The Perimeter Institute: http://www.youtube.com/watch?v=Tg5gJxXBh8s
http://www.theremainsoftheweb.com/wp-content/uploads/2012/06/alan.turing.2.jpg
"It is possible to invent a single machine which can be used to compute any computable sequence."
JvN: "I am thinking about something much more important than bombs. I am thinking about computers."
The IAS computer in 1952. Numbers that 'mean' things vs. numbers that 'do' things. Adjectives versus verbs. Code that can modify code.
Overlay graphic: http://www.evolutionoftheweb.com/static
Al Gore invents the internet – image underneath…
Data from PubMed, week of July 16, 2007.
# of pubs: Search ("XXXX/1/1"[Date - Publication] : "XXXX/12/31"[Date - Publication]), where XXXX ranges from 1940 to 2012.
# of citations: http://www.nlm.nih.gov/bsd/bsd_key.html
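The per-year search described in that note can be sketched in a few lines. This is a hypothetical Python sketch of the query construction only; the function name is my own, and the actual count retrieval (via NCBI E-utilities `esearch`, which needs network access and polite rate limiting) is deliberately left out.

```python
def pubmed_date_term(year: int) -> str:
    """Build the PubMed date-range query from the note: all records
    with a publication date inside one calendar year."""
    return (f'"{year}/1/1"[Date - Publication] : '
            f'"{year}/12/31"[Date - Publication]')

# One query term per year from 1940 to 2012, as described in the note
terms = {year: pubmed_date_term(year) for year in range(1940, 2013)}
print(terms[2007])
# "2007/1/1"[Date - Publication] : "2007/12/31"[Date - Publication]

# To get the per-year publication counts, each term would be sent to
# NCBI E-utilities esearch (db=pubmed) and the returned Count read off.
```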
Citation: Lerch JK, Kuo F, Motti D, Morris R, Bixby JL, et al. (2012) Isoform Diversity and Regulation in Peripheral and Central Neurons Revealed through RNA-Seq. PLoS ONE 7(1): e30417. doi:10.1371/journal.pone.0030417