A keynote given on experiences in curating workflows and web services.
3rd International Digital Curation Conference: "Curating our Digital Scientific Heritage: a Global Collaborative Challenge"
11-13 December 2007
Renaissance Hotel
Washington DC, USA
Towards Responsible Content Mining: A Cambridge perspective (petermurrayrust)
ContentMining (Text and Data Mining) is now legal in the UK for non-commercial research. Cambridge UK is a natural centre, with several components:
* a world-class University and Library
* many publishers, both Open Access and conventional
* a digital culture
* ContentMine - a leading proponent and practitioner of mining
Cambridge University Press welcomes content mining and invited PMR to give a talk there. He demonstrated the technology and protocols and proposed a practical way forward in 2017.
ContentMining for France and Europe: lessons from two years in the UK (petermurrayrust)
I have spent two years carrying out Content Mining (aka Text and Data Mining) in the UK under the 2014 "Hargreaves" exception. This talk was given in Paris, to ADBU, after France had passed the Loi pour une République numérique. I illustrate what worked, what did not, and why, and offer ideas to France and Europe.
How open data contribute to improving the world: the life-science use case, and the technical, social, and ethical issues.
This talk was given within the iGEM 2020 programme by the Imperial College London student team (https://2020.igem.org/Team:Imperial_College), in a webinar organised by the SOAPLab group on the topic of Ethics of Automation. Dr Brandon Sepulvado was the other speaker of the day.
High-throughput mining of the scholarly literature; talk at NIH (petermurrayrust)
The scientific and medical literature contains huge amounts of valuable unused information. This talk shows how to discover, extract, re-use, and interpret it. Wikidata is presented as a key new tool and infrastructure. Everyone can become involved; however, some of the barriers to use are sociopolitical, and these are identified and discussed.
Text and data mining in the UK and France (ADBU, 13 Dec 2016) (Rob Johnson)
Slides from my presentation in Paris on 13 Dec 2016, summarising the findings of our study on text and data mining in public research for the ADBU. Full report available at http://adbu.fr/etude-tdm/.
This workshop aims to gather together practitioners of all levels and from a variety of research areas (agronomy, plant biology, food, life sciences, etc.) to compare best practices, points of view, and projects about producing and consuming data in the agrifood field.
As happens for digital data in general, current trends in this arena include the integration of "traditional" semantics-based approaches (e.g., ontologies, RDF-based linked data) with lightweight schemas (e.g., Bioschemas/schema.org), the use of JSON-based APIs, the development of data lakes and knowledge graphs based on NoSQL technologies, and graph databases based on property graphs (e.g., Neo4j, TinkerPop/Gremlin).
Workshop participants will get an opportunity to discuss how these approaches and technologies are being used in the agrifood field, for the purpose of realising the FAIR data principles and making data sharing a powerful tool for research, industry, and socio-economic investigation. In particular, we will propose an interactive session to outline the way participant-proposed datasets can be encoded through Bioschemas or similar approaches.
Scott Edmunds from GigaScience on "Publishing in the Open Data Era", at the "Open, Crowdsource and Blockchain Science!" hangout at Hackerspace.sg, 23 March 2015.
The Open Drug Discovery Teams (ODDT) project provides a mobile app primarily intended as a research topic aggregator of predominantly open science data collected from various sources on the internet. It exists to facilitate interdisciplinary teamwork and to relieve the user from data overload, delivering access to information that is highly relevant and focused on their topic areas of interest. Research topics include areas of chemistry and adjacent molecule-oriented biomedical sciences, with an emphasis on those which are most amenable to open research at present. These include rare and neglected diseases, and precompetitive and public-good initiatives such as green chemistry.
The ODDT project uses a free mobile app as the user entry point. The app has a magazine-like interface and server-side infrastructure for hosting chemistry-related data as well as value-added services. The project is open to participation from anyone and allows users to make annotations and assertions, thereby contributing to the collective value of the data for the engaged community. Much of the content is derived from public sources, but the platform is also amenable to commercial data input. The technology could also readily be used in-house by organizations as a research aggregator integrating internal and external science and discussion. The app's infrastructure is currently based on the Twitter API, as a useful proof of concept for a real-time source of publicly generated content. This could be extended further by accessing other APIs providing news and data feeds relevant to a particular area of interest. As the project evolves, social-networking features will be developed for organizing participants into teams, with various forms of communication and content management possible.
Mobile devices are now mainstream handheld computers, providing access to computational power and storage that a decade ago were available only on desktop computers. In chemistry informatics, the majority of capabilities previously found only on desktop computers are fast migrating to mobile devices, making use of a combination of powerful visualization capabilities, fast cloud-based calculations, websites optimized for mobile platforms, and dedicated "apps". This presentation will provide an overview of how access to chemistry continues to become increasingly mobile, and specifically of how the Royal Society of Chemistry is contributing to this computing environment.
High-throughput mining of the scholarly literature (TheContentMine)
Published on Jun 7, 2016 by PMR
Talk given to statisticians in Tilburg, with emphasis on scholarly comms for detecting unusual features. Includes demo of Amanuens.is and image mining
The Roots: Linked data and the foundations of successful Agriculture Data (Paul Groth)
Some thoughts on successful data for the agricultural domain. Keynote at Linked Open Data in Agriculture
MACS-G20 Workshop in Berlin, September 27th and 28th, 2017 https://www.ktbl.de/inhalte/themen/ueber-uns/projekte/macs-g20-loda/lod/
Amanuens.is: Humans and machines annotating scholarly literature (TheContentMine)
Published on May 19, 2016 by PMR
About 10,000 scholarly articles ("papers") are published each day. Amanuens.is is a symbiont of ContentMine and Hypothes.is (both Shuttleworth projects/Fellows) which annotates theses using an array of controlled vocabularies ("dictionaries"). The results, in semantic form, are used to annotate the original material. The talk included live demos and used plant chemistry as the examples.
Making it Easier, Possibly Even Pleasant, to Author Rich Experimental Metadata (Michel Dumontier)
Biomedical researchers will remain stymied in their ability to take full advantage of the Big Data revolution if they can never find the datasets that they need to analyze, if there is lack of clarity about what particular datasets contain, and if data are insufficiently described.
CEDAR, an NIH BD2K Center of Excellence, aims to develop methods and tools to vastly ease the burden of authoring good experimental metadata, and to maximally use this information to zero in on datasets of interest.
Automatic Extraction of Knowledge from the Literature (TheContentMine)
Published on May 11, 2016 by PMR
ContentMine tools (and the Harvest alliance) can be used to search the literature for knowledge, especially in biomedicine. All tools are Open, and shortly we shall be indexing the complete daily scholarly literature.
Can machines understand the scientific literature? (petermurrayrust)
With over 5,000 scientific articles published per day, we need machines to help us understand the content. This material is to be used at an interactive session for the Science Society at Trinity College, Cambridge, UK.
Mining the scientific literature for plants and chemistry (petermurrayrust)
ContentMine can read the daily scientific literature and extract facts. This talk was given to the OpenPlant project, with whom ContentMine collaborates, at a meeting on 2016-07-25/27 in Norwich. Examples of extracted facts are given.
Amanuens.is: Humans and machines annotating scholarly literature (petermurrayrust)
About 10,000 scholarly articles ("papers") are published each day. Amanuens.is is a symbiont of ContentMine and Hypothes.is (both Shuttleworth projects/Fellows) which annotates theses using an array of controlled vocabularies ("dictionaries"). The results, in semantic form, are used to annotate the original material. The talk included live demos and used plant chemistry as the examples.
Professor Carole Goble, University of Manchester, talks at the RIN "Research data: policies & behaviour" event as part of a series on Research Information in Transition.
Can computers understand the scientific literature? (includes comp-sci material) (TheContentMine)
Published on Jan 24, 2014 by PMR
With the semantic web, machines can autonomously carry out many knowledge-based tasks as well as humans can. The main problems are not technical but the prevention of access to information. I advocate automatic downloading and indexing of all scientific information.
ContentMining (aka Text and Data Mining, TDM) is beneficial and is legal in the UK and a few other countries. Many groups in Europe are looking to make it legal there as well, but many vested interests oppose it.
This short presentation shows the benefits of content mining, some of the technology, and the ways it can be used and promoted by communities of practice. I urge all attendees at CopyCamp, and the wider world, to press for liberalization of copyright.
Visibilidad de la información científica, identidad digital y acreditación a... (Julio Alonso Arévalo)
Visibility of scientific information, digital identity, and academic accreditation.
Can we use altmetrics at the institutional level? (Torres Salinas)
This paper aims to explore the coverage of the Altmetric.com database and its potential for showing universities' research profiles in relation to other databases. Specifically, our objectives are the following:
1. Analyse the coverage of Altmetric.com at the institutional level and verify its validity as a data source for obtaining alternative metrics derived from universities' research activity, in comparison with the Web of Science. For this, we will work with a small sample of four Spanish universities.
2. Analyse coverage differences when obtaining bibliometric profiles from Altmetric.com and the Web of Science. Some studies have reported a higher coverage of the Social Sciences and Humanities, suggesting the potential of altmetric indicators in these areas (Costas, Zahedi, & Wouters, 2015b).
Societal Impact
Nicolas Robinson Garcia, INGENIO (UPV-CSIC), Universitat Politècnica de València, Spain / Daniel Torres-Salinas, Universidad de Navarra and Universidad de Granada (EC3metrics & Medialab UGR), Spain
There is increasing pressure to develop indicators and methodologies that can offer evidence of the societal impact of researchers' activity. This presentation will offer a comprehensive overview of the definition of societal impact, types of impact, and the attribution problem when searching for potential indicators. Special attention will be given to altmetric indicators and their potential role in tracing social engagement and its relation to societal impact. Examples of potential uses and current lines of work will be presented.
***************************
Scientometric procedures are increasingly used to analyse developments and trends in science and technology. The decisions to be taken often have severe implications. Consequently, data handling, indicator construction, and interpretation require competent expert knowledge, which is currently available only to a limited extent to stakeholders in Central Europe, not least owing to a lack of training opportunities. Responding to the lack of a pertinent scientometrics education (especially in German-speaking countries) and to increasing demand (particularly from research quality managers), the University of Vienna (A), the German Centre for Higher Education Research and Science Studies - DZHW (D) and the Katholieke Universiteit Leuven (B) joined forces to found the European Summer School for Scientometrics (esss) in 2010.
Iniciativas empresariales relacionadas con la evaluación de la ciencia el ... (Torres Salinas)
Presentation given as part of the UPV/EHU summer courses (UDA IKASTAROAK / CURSOS DE VERANO UPV/EHU). Summer course: "Evaluation of research activity and initiatives to support the researcher", 19-20 July.
Cómo se evalúa y se progresa en la carrera científica (Torres Salinas)
How one is evaluated, and how one progresses, in a scientific career: the five stages of becoming a senior researcher.
Daniel Torres-Salinas. Research-promotion sessions for postgraduate students: research careers and research projects. Universidad de Granada, I Plan de Promoción de la Investigación, Vicerrectorado de Investigación y Transferencia, Escuela Internacional de Posgrado. 7 February 2017, Salón de Actos, Edificio Politécnico.
Actas de la Jornada #appugr: aplicaciones móviles orientadas a la investigaci... (Torres Salinas)
The massive uptake of smartphones and other mobile devices has brought new forms of expression, communication, and interaction with our environment thanks to apps. Scientific activity has not been immune to the app phenomenon, and apps have gradually been incorporated into researchers' everyday tasks (data collection, dissemination, measurement, etc.). At the Vice-Rectorate for Research and Transfer, we consider the incorporation of apps into R&D projects an innovative and distinctive element that can strengthen funding applications. With this event we therefore wanted to introduce the UGR research community to the universe of apps, facilitate their development by putting researchers in contact with companies in Granada, and showcase the use of apps through real experiences developed at UGR. With these somewhat informal proceedings we want to collect some of the papers and materials that were presented, but above all we want to acknowledge the warm reception the event received and the work of all the UGR colleagues and services who took part in organising it.
Final-degree projects (Trabajos Fin de Grado): definition and typology. The structure of a scientific work.
The research paper: definition and structure. IMRaD: Introduction, Materials and Methods, Results, and Discussion with conclusions.
The literature review, definition and structure: introduction, information sources and search methodology, results, conclusions.
The professional report, definition and structure: introduction, the company, results, conclusions.
Writing: its phases. Drafting, and revisions of the draft (content, style, material presentation).
How to write a scientific article? Title, abstract, keywords, introduction, methodology, results, tables, figures, acknowledgements, citations, and bibliographic references.
Bibliotecas, bibliotecarios y otros submundos. Frases y citas sobre bibliote... (Julio Alonso Arévalo)
A compilation of sayings and quotations about books, libraries, librarians, and information, compiled by Julio Alonso Arévalo. You can find more on the Universo Abierto blog, in the PreTextos section: https://universoabierto.com/tag/pretextos/
Cómo utilizar google scholar para mejorar la visibilidad de nuestra producció... (Torres Salinas)
COURSE CONTENTS: 1. The importance of Google Scholar (GS). 2. A general picture of GS. 3. How to get our documents indexed in GS. 4. How to create and manage a profile. 5. Practical session: creating our profile.
Publicar en Revistas Científicas de Impacto: Competencia y Colaboración (Universidad de Málaga)
Slides accompanying the professor's talk (and therefore not self-contained), presented within the "Seminario de Investigación AF8 Tendencias de Investigación en Comunicación y Educación".
Traspasando el muro: acceso abierto a la ciencia (Torres Salinas)
THEORETICAL CONTENTS:
Origins
Philosophy
Self-archiving
Journals
Impostures
Visibility
Positioning
Kit
PRACTICAL SESSION FOCUSED ON:
How to make articles available in OA?
How to make supplementary material available in OA?
How to make research data available in OA?
Profound changes are taking place in the new information ecosystem. Social media are changing how we interact, how we present ideas and information, and how we judge the quality of content and contributions. In recent years, hundreds of platforms have emerged that allow all kinds of information to be shared freely and connect us through networks. These new tools generate statistics on activity and interactions among their users, such as mentions, retweets, conversations, and comments. As Eric Qualman puts it, "Social media is not a fad; it is a fundamental shift in the way we communicate." Alongside these changes, most researchers have moved their research activities to the web, and with the success of social media this has become even more evident, since these tools have greater potential to develop a wider range of academic influence than traditional publishing environments. Participatory technologies make it easier for authors to share information and to foster scientific discovery and the visibility of research through databases, platforms, and services supporting the research process. All of this has been favoured by advances driving a more interconnected and open science, with remarkable progress in systems for identifying works and authors. This process makes it necessary for researchers to know, use, and manage the mechanisms for evaluating, accrediting, and enhancing the scientific visibility of their publications, which in turn affects the development of the researcher's own career and also, collectively, the quality of the universities themselves, whose measurement is based fundamentally on rankings built from the research data of their academics.
All of this highlights, now more than ever, the need for those who do research to understand the mechanisms of publishing, communication, measurement, and promotion.
The term Web 2.0 was coined by Tim O'Reilly in 2004 to refer to a second generation in the history of the Web, based on user communities and a special range of services that foster collaboration and the agile exchange of information among users.
It is a set of tools and resources that extend the functions of the traditional Web, based on the same philosophy: collective intelligence and an architecture of participation.
Pautas para la elaboración de proyectos: convocatorias Retos y Excelencia (Torres Salinas)
Abstract: This seminar aims to advise researchers on preparing and submitting their research projects, drawing on our experience in the Unit. The goal is to avoid the errors of previous calls, achieve greater success in project awards, and secure more funding for new projects. Also valuable is the perspective offered by researchers with more experience in this field who have acted as ANEP evaluators.
Gestión y Monitorización de la información y el impacto científico: Perfiles,... (Torres Salinas)
A 20-hour course with Daniel Torres-Salinas, Nicolás Robinson García, Esteban Romero Frías, and Evaristo Jiménez Contreras.
This course arises in response to the great diversification of information sources and scientific databases in recent years, as well as the many initiatives for managing and monitoring the researcher's curriculum vitae. On the one hand, the course aims to give researchers the knowledge and skills needed to monitor, and to set up topic alerts in, the various databases covering their area of interest. On the other hand, it aims to equip them with the tools needed to manage their scientific CV efficiently and practically.
Abstract: The course is structured in four modules, each devoted to one of the facets to be developed. The first is introductory and aims to give students the basics of using scientific databases and the bibliometric indicators each one offers. The second module is devoted to creating scientific profiles to monitor and increase the visibility of our scientific output. The third module focuses on creating and managing alert systems, both bibliographic and impact-related, as well as on the use of social reference managers. The last module deals with the use of author identifiers and with managing, exporting, and importing the scientific CV across platforms and formats, with particular attention to CVN, ORCID, ResearcherID, and the Scopus Author Identifier.
Contents: Module I. Scientific information sources and bibliometric indicators - Module II. Creating scientific profiles - Module III. Alert systems - Module IV. Author identifiers and codes
Ten golden rules for publishing in high-impact journals - Torres Salinas
Publishing in so-called high-impact scientific journals has become the main goal of researchers and R&D institutions. This course presents 10 tips to maximise the chances of acceptance for manuscripts submitted to such journals. We cover aspects to consider while preparing an article, such as selecting co-authors, presenting tables, graphs and illustrations, and preparing supplementary materials. We also offer advice for the submission process and the subsequent peer review.
Presentation of a descriptive analysis of the DCI from Thomson Reuters by Daniel Torres-Salinas, Evaristo Jiménez-Contreras and Nicolás Robinson-García at the STI Conference held in Leiden (The Netherlands), 3-5 September 2014. sti2014.cwts.nl
Methodologies for Addressing Privacy and Social Issues in Health Data: A Case...Trilateral Research
Huge quantities of complex and diverse data are generated every day in healthcare institutions, including clinical documentation (diagnostics, lab data, imaging data, etc.), administrative data, activity and cost data, and R&D data from clinical trials.
Scott Edmunds: Channeling the Deluge: Reproducibility & Data Dissemination in...GigaScience, BGI Hong Kong
Scott Edmunds' talk "Channeling the Deluge: Reproducibility & Data Dissemination in the 'Big-Data' Era" at the 7th International Conference on Genomics (ICG7), Hong Kong, 1 December 2012.
The need for a transparent data supply chainPaul Groth
Illustrating data supply chains and motivating the need for a more transparent data supply chain in the context of responsible data science. Presented at the 2018 KNAW-Royal Society bilateral meeting on responsible data science.
Knowledge Science for AI-based biomedical and clinical applicationsCatia Pesquita
The great barrier to AI adoption in healthcare and biomedical research is lack of trust.
Assessing trustworthiness requires data, domain and user context, which can be supported by ontologies, knowledge graphs and FAIR data.
Harnessing Edge Informatics to Accelerate Collaboration in BioPharma (Bio-IT ...Tom Plasterer
As scientists in the life sciences we are trained to pursue singular goals around a publication or a validated target or a drug submission. Our failure rates are exceedingly high especially as we move closer to patients in the attempt to collect sufficient clinical evidence to demonstrate the value of novel therapeutics. This wastes resources as well as time for patients depending upon us for the next breakthrough.
Edge Informatics is an approach to ameliorate these failures. By combining technical and social solutions, knowledge can be shared and leveraged across the drug development process. This is accomplished by making data assets discoverable, accessible, self-described, reusable and annotatable. The Open PHACTS project pioneered this approach and has provided a number of the technical and social solutions to enable Edge Informatics. A number of pre-competitive consortia and some content providers have also embraced this approach, facilitating networks of collaborators within and outside a given organization. Taken together, these elements foster more accurate, timely and inclusive decision-making.
The slides cover why bioinformatics emerged, who bioinformaticians are, what they do, and what interesting applications and challenges the field offers.
Slides were prepared for the Bioinformatics seminar 2016, Institute of Computer Science, University of Tartu.
Ross Wilkinson - Data Publication: Australian and Global Policy DevelopmentsWiley
Australia invests AUD 1-2B per annum in research data. Like most countries, it wants to get the best return possible on this data. Europe is spending €1.4B on its open data "pilot". This means the data should be FAIR: findable, accessible, interoperable, and reusable. Part of this is that data should be routinely "published" and available in a "data repository". But what does this mean?
Ross Wilkinson
CEO, Australian National Data Service
Presented at the 2015 Wiley Publishing Seminar, 5 November, Melbourne, Australia.
Bibliometrics in practice: how to generate reports for institutions - v2.0 / ...Torres Salinas
In an institutional context and at a professional level, one of our main tasks is to carry out bibliometric reports. These studies are essential because they are used by managers to make decisions (distribution of funds, recruitment of personnel, planning of research lines, etc.). In this talk we will explain how to make a global bibliometric report of an institution, using the University of Granada as a case study. We focus on these topics: 1) general considerations (target, selection of indicators, objectives, etc.); 2) what sources of information can be used; 3) how to contextualize and interpret the indicators; 4) how to compare the results with other institutions (benchmarking); 5) how to make graphs and tables; and 6) dissemination of results and data.
Journal impact measures: the Impact FactorTorres Salinas
The seminar on impact measures will first shed light on the best known and most controversial indicator, namely Garfield’s Journal Impact Factor. Its strengths and weaknesses as well as its correct use will be discussed thoroughly. Moreover the corresponding analytical tool, Clarivate Analytics’s Journal Citation Reports will be demonstrated.
Presented at the European Summer School for Scientometrics (ESSS), 16 July 2019, Louvain.
Bibliometrics in practice: how to generate reports for institutions.Torres Salinas
In an institutional context and at a professional level, one of our main tasks is to carry out bibliometric reports; these studies are essential because they are used by managers to make decisions (distribution of funds, recruitment of personnel, planning of research lines, etc.). In this talk we will explain how to make a global bibliometric report of an institution, using the University of Granada as a case study. We focus on these topics: 1) general considerations (target, selection of indicators, objectives, etc.); 2) what sources of information can be used; 3) how to contextualize and interpret the indicators; 4) how to compare the results with other institutions (benchmarking); 5) how to make graphs and tables; and 6) dissemination of results and data.
How to select and publish in high-impact journals in the Social Sciences - Torres Salinas
Publishing in so-called high-impact scientific journals has become the main goal of researchers. This course is divided into two parts. The first focuses on how to find, identify and select the most suitable scientific journals for publishing our work. The second presents tips to maximise the chances of acceptance for manuscripts submitted to such journals, covering aspects such as manuscript preparation, authorship, the design of tables and graphs, the preparation of bibliographic references, and the submission and evaluation process.
Date and time: Wednesday 21 February 2017, 10:00-13:00
Contact: Daniel Torres-Salinas - torressalinas@go.ugr.es
Instructor: Daniel Torres-Salinas (Unidad de Evaluación de la Actividad Científica)
Venue: Facultad de Ciencias de la Educación - Aula Magna
World War II in video games - Torres Salinas
This presentation is divided into three parts:
Part 1 (Daniel Torres Salinas): World War II in video games
Part 2 (Javier Cantón and Daniel Torres Salinas): An annotated anthology of games
Part 3 (Wenceslao Arroyo): Introduction to gameplay
Presented at the Call of Duty World War II gameplay session
Organised by: Medialab UGR
Date: 24 November 2017
Contributions presented at the 1st Meeting of Scientific Evaluation Services - Torres Salinas
This deck includes the following presentations:
Daniel Torres-Salinas. Practical cases of scientific evaluation in a Vice-Rectorate for Research: reports, internal funding programmes and calls
Daniel Torres-Salinas. The Knowmetrics university ranking: the social media impact (altmetrics) of Spanish universities
Presented at the 1st Meeting of scientific evaluation services in vice-rectorates for research: what do our universities and managers need?
Organised by: Vice-Rectorate for Research and Transfer, University of Granada
Aimed at: R&D managers who regularly work with bibliometric indicators, staff in charge of evaluation tasks, librarians and information professionals responsible for research support services, and in general any professional or researcher interested in the world of bibliometrics
Date: 26-27 October 2017
Funded by: Plan Propio de Investigación y Transferencia, P22 - Visiting Scholars
Bibliometric solutions for identifying potential collaboratorsTorres Salinas
EC3metrics is taking part in the 2017 European Summer School for Scientometrics (ESSS), held in Berlin (Germany) from 17 to 22 September 2017. The event has been held annually since 2010 and is organised by the University of Vienna, the German Centre for Higher Education Research and Science Studies (DZHW), the Katholieke Universiteit Leuven and EC3metrics, a member of the organising committee since 2017. ESSS was created in 2010 in response to a lack of training in scientometrics (especially in German-speaking countries) and to growing demand for it from policy makers, directors, research managers, scientists, information specialists and librarians. Following the model of previous events, this year's course theme is "Identification of Research Focuses. National & Institutional Profiles and Strategic Partnerships".
Daniel Torres-Salinas and Nicolás Robinson-García are members of the organising committee on behalf of EC3metrics and also take part as instructors. On Thursday 21 September, Nicolás Robinson-García and Daniel Torres-Salinas will present the seminar "Bibliometric solutions for identifying potential collaborators".
Abstract: Bibliometric indicators and methodologies are commonly used for benchmarking institutions and individuals, and analyzing their research performance. Their potential for identifying partners and promoting collaboration is often overlooked by research institutions. In this presentation we will discuss different indicators and methodologies that can be used to spot institutions, research groups and individuals working on similar research fronts. Using different visualization techniques, we will provide examples of how to present these data in an appealing way that can inform university and research managers. These types of analyses are useful when searching for potential partners or designing strategies to establish scientific collaboration networks.
Altmetric beauties. Which scientific papers have the greatest impact on... - Torres Salinas
Talk. Daniel Torres-Salinas. Altmetric Beauties. Which scientific papers have the greatest impact on social media? 1st International Congress "Territorios Digitales". Universidad de Granada, Medialab UGR, Granada, 29-30 June 2017.
How scientific careers are evaluated and advanced [version 2.0] - Torres Salinas
Presentation prepared for the "ERA CAREER DAY TOLEDO" meeting, "The Research Career in Europe: is it (im)possible in Spain?", organised by the Universidad de Castilla-La Mancha and held on 23-24 May 2017 at the Antigua Fábrica de Armas campus in Toledo. The event is part of the EUESCADA project, which aims to offer as much information as possible to university students and researchers in Spain who are weighing up their career options. Over its two days, the event presented information on the different professional paths, the range of possibilities for geographic mobility in a research career, the researcher skills and abilities most valued by companies, and related topics. Its main objective is to increase the employability of researchers and improve their career planning.
The scientific CV and its visibility: formats, management and dissemination on the Internet - Torres Salinas
This course responds to the great diversification of information sources that have emerged for managing and monitoring a researcher's Curriculum Vitae. On the one hand, it aims to give researchers the knowledge and skills needed to quickly prepare a CV in the CVA and CVN formats and to exchange curricular information between different databases. We also introduce bibliometric indicators that are useful when defending our scientific CV. We then focus on how to make our scientific CV visible and well positioned on the Internet, concentrating in particular on how to design a scientific CV web page and on the digital platforms (ORCID, ResearcherID, ResearchGate or Google Scholar) best suited to disseminating our scientific output.
The 10 best apps for scientists #investigacionmovil - Torres Salinas
Abstract: The use of smartphones and tablets has become widespread across most human activities, bringing new forms of expression and communication. One factor in the success of these devices is their applications (apps), which have changed the way we search for, collect and exchange information. Academic and scientific activities are no exception to this trend. In this course we have therefore selected the 10 best mobile apps for researchers, chosen from among hundreds: both apps for a researcher's day-to-day work and apps that illustrate how they can serve as one more instrument for collecting and measuring data in our research. Contents: 1. A brief introduction to apps 2. A selection of the best apps for:
managing files and documents in the cloud; searching for and subscribing to scientific content; collecting and measuring scientific data
How to select a scientific journal in communication - Torres Salinas
Paper presented at the 1st International Congress "Comunicación y Pensamiento. Comunicracia y desarrollo social"
Event description:
The research group on Structure, History and Contents of Communication (GREHCCO) and the Laboratory of Communication Studies (LADECOM) of the Faculty of Communication of the Universidad de Sevilla host and promote this first scientific meeting, to be held in Seville on 9-11 March 2016.
EC3metrics: entrepreneurship and knowledge transfer from the social sciences - Torres Salinas
Presentation given at the second Fundación Séneca workshop "Emprender desde la Ciencia: un reto para el crecimiento inteligente" Wf(+).
"Emprender desde la Ciencia" (Entrepreneurship from Science: a challenge for smart growth) aims to motivate researchers who are in a position to launch ideas or projects based on their scientific or technological expertise, ideas that could become new business ventures, by drawing on the direct testimony of scientific entrepreneurs. It sets out to show first-hand the experience of those who have done it, and is aimed primarily at those best placed to benefit from it: researchers who, despite the obstacles, have taken action and managed, in the various formats under discussion and in different fields of knowledge, to give their research a new dimension, turn ideas into valuable products and services, and contribute to growth based on the best knowledge, moving from ideas to market.
Egociencia: online reputation for scientists - Torres Salinas
OBJECTIVES OF THE TALK
To compare traditional scientific reputation with online reputation
To introduce the basic concepts of online reputation: digital identity, what is being said, and positioning
Proposals and questions we should ask ourselves in order to manage our web presence properly
Altmetrics: indicators, uses and limitations - Torres Salinas
This is a review of altmetrics, or alternative indicators. The concept is defined as the creation and study of new indicators, based on Web 2.0, for the analysis of scientific and academic activity. The underlying idea is that, for example, mentions in blogs, the number of tweets, or the number of people saving an article in their reference manager may be a valid measure of the use and impact of scientific publications. These measures have accordingly moved to the centre of the debate in bibliometric studies. The course first presents the main platforms and indicators of this kind; it then reviews the main empirical studies, paying particular attention to the correlations between bibliometric and alternative indicators. It closes, by way of reflection, by pointing out the main limitations of altmetrics and the role they can play in capturing the impact of research on Web 2.0 platforms.
Course, 4th ed.: how to publish in high-impact scientific journals - tips and rules... - Torres Salinas
Publishing in so-called high-impact scientific journals, understood as those indexed in the Thomson Reuters databases, has become the main goal of researchers and R&D institutions. This course therefore presents tips to maximise the chances of acceptance for manuscripts submitted to such journals. We first define what a high-impact journal is and its benefits for both researchers and institutions. We then cover aspects to consider while preparing the manuscript, such as authorship, the design of tables and graphs, and the preparation of bibliographic references. Once the manuscript is ready, we focus on the key criteria for selecting the right journal. Finally, we review factors to bear in mind during submission, and then concentrate on the peer-review process and responding to reviewers.
Course outline
PREFACE ● The 7 habits
1. Introduction: ● What a high-impact journal is ● Why publish in high-impact journals ● Excuses and changes
2. Before the manuscript: ● Collaborators and authorship ● Selecting the journal
3. Preparing the manuscript: ● On the bibliography ● Tables and graphs ● The journal's guidelines and English usage ● Review by colleagues and the acknowledgements
4. Submitting the manuscript: ● The cover letter ● Data and supplementary material ● Final steps
5. The peer-review process: ● How it works ● Review decisions ● Responding to reviewers ● Rejected manuscripts
6. Final thoughts and basic bibliography
2024.06.01 Introducing a competency framework for language learning materials ... - Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Acetabularia Information For Class 9 .docxvaibhavrinwa19
Acetabularia acetabulum is a single-celled green alga that in its vegetative state is morphologically differentiated into a basal rhizoid and an axially elongated stalk, which bears whorls of branching hairs. The single diploid nucleus resides in the rhizoid.
How to Make a Field invisible in Odoo 17Celine George
It is possible to hide certain fields in Odoo, most commonly by setting the "invisible" attribute in the field definition. This slide shows how to make a field invisible in Odoo 17.
Macroeconomics- Movie Location
This will be used as part of your Personal Professional Portfolio once graded.
Objective:
Prepare a presentation or a paper using research, basic comparative analysis, data organization and application of economic information. You will make an informed assessment of an economic climate outside of the United States to accomplish an entertainment industry objective.
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Embracing GenAI - A Strategic ImperativePeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
Normal Labour/ Stages of Labour/ Mechanism of LabourWasim Ak
Normal labor, also termed spontaneous labor, is defined as the natural physiological process through which the fetus, placenta, and membranes are expelled from the uterus through the birth canal at term (37 to 42 weeks).
4. Rationale
"The 'dirty little secret' behind the promotion of data sharing is that not much sharing may be taking place"
Borgman, 2012
"The lack of recognition incentives is regarded as a crucial and unresolved obstacle to establishing a data sharing culture"
Piwowar et al., 2008
6. Data Citation Index
GENERAL DESCRIPTION
Multidisciplinary database launched in 2012
It indexes data repositories from all scientific fields, along with the citation data associated with them
Repositories undergo an evaluation and selection process based on subject, editorial content, and geographic origin and scope
7. Data Citation Index
PUBLICATION TYPES
Data repositories: databases comprising datasets and data studies, which store and provide access to the raw data
Datasets: a single or coherent set of data or a data file provided by the repository, as part of a collection, data study or experiment
Data studies: descriptions of studies or experiments held in repositories, with the associated data which have been used in the data study
8. Data Citation Index
MATERIAL AND METHODS
Data retrieval in May-June 2013
Analysis by areas: Science, Engineering & Technology, Social Sciences and Arts & Humanities
arXiv:1306.6584
13. Data Citation Index
RECORDS AND CITATIONS BY AREA AND TYPE

Area                     | Datasets | Citations | Data studies | Citations
Engineering & Technology | 1545     | 890       | 240          | 26
Humanities & Arts        | 44588    | 1         | 6847         | 20459
Science                  | 2004449  | 293193    | 114338       | 26189
Social Sciences          | 424952   | 7         | 37855        | 69659
14. Data Citation Index
TOP 10 CATEGORIES HIGHLY CITED FOR DATASETS
[Chart: citation average and standard deviation plotted against % of total citations from the DCI for the top 10 categories: Crystallography, Biochemistry & Mol. Biology, Genetics & Heredity, Geosciences, Physics (Atomic, Molecular), Evolutionary Biology, Cell Biology, Spectroscopy, Medical Laboratory Tech., Nanoscience & Nanotech. Percentage labels shown: 47%, 23%, 16%.]
15. Data Citation Index
TOP 10 CATEGORIES HIGHLY CITED FOR DATA STUDIES
[Chart: citation average and standard deviation plotted against % of total citations from the DCI for the top 10 categories: Sociology, Demography, Economics, Business, Political Science, Biochemistry & Mol. Biology, Genetics & Heredity, Health Care Sciences, Criminology & Penology, Family Studies. Percentage label shown: 30%.]
16. Data Citation Index
MAIN REPOSITORIES IN THE DCI, CITATIONS & RECORDS
[Chart: total number of citations in the Data Citation Index plotted against total number of records indexed, for the main repositories: MiRBase, Gene Expression, UniProt Knowledgebase, Crystallography Open Database, U.S. Census Bureau TIGER, Protein Data Bank, ArrayExpress Archive, PANGEA, UK Data Archive, Inter-university Consortium for Political and Social Research, Animal QTL Database. Legend: size = total citations; pie chart = % of citations.]