This document discusses publishers' role in ensuring publication ethics. It argues that publishers share responsibility with editors for the integrity of their publications. While misconduct such as plagiarism, fabrication, and falsification is rare, occurring in around 2% of studies, publishers can help prevent, detect, and respond appropriately to misconduct by educating authors and implementing screening tools and clear policies. The document also notes that maintaining editorial independence and avoiding conflicts of interest are important for upholding ethics and trust in academic publishing.
This document provides an agenda for the "Pharmaceutical Publication Planning & Management" conference on July 30-31, 2015 in Philadelphia. The agenda includes sessions on developing effective global publication strategies, managing objectives and timelines, measuring the success of publications, and best practices for peer review. Industry thought leaders from companies like Celgene, Bristol-Myers Squibb, Novartis, and Shire will present on successfully implementing publication plans and partnerships across organizations.
The document provides a checklist for developing marketing publications that includes:
1) Defining the target audience and publication objectives.
2) Planning adequate time for the publication process, which typically takes 4-6 weeks.
3) Outlining the typical stages of publication development including planning, writing, design, proofing, printing, and mailing.
4) Emphasizing the importance of client feedback and approval at key stages to maintain the project timeline.
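The staged process above can be sketched as a small scheduler; the per-stage durations below are illustrative assumptions chosen to sum to the checklist's 4-6 week window, not figures from the checklist itself:

```python
from datetime import date, timedelta

# Hypothetical stage durations in calendar days; the stage names follow
# the checklist, the durations are invented and total 35 days (5 weeks).
STAGES = [("planning", 5), ("writing", 10), ("design", 7),
          ("proofing", 5), ("printing", 5), ("mailing", 3)]

def schedule(start):
    """Return each stage's start date, beginning from `start`,
    plus the overall completion date."""
    out, d = [], start
    for name, days in STAGES:
        out.append((name, d))
        d += timedelta(days=days)
    return out, d

plan, done = schedule(date(2015, 6, 1))
print(done)  # 35 days after the start: inside the 4-6 week window
```

Working the stages forward like this also makes it easy to see where a missed client approval pushes the mail date.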
Liz Wager's 2011 CSE presentation on editors finding misconduct (Ivan Oransky)
The document discusses steps that journal editors can take to deter and detect research and publication misconduct. It recommends that editors educate authors, promote good research practices, be aware of how policies may influence behavior, inform authorities of misconduct, and correct the literature. While editors cannot prevent all misconduct, the document encourages screening submissions using plagiarism detection software, checking images for manipulation, and considering data for irregularities. Editors are advised to acknowledge when misconduct occurs and work to prevent, detect, and correct it through guidance, education, and collaboration.
This document discusses the challenges and considerations for starting a biotech company focused on developing drugs for Alzheimer's disease. It outlines key steps like forming a corporation, creating a business plan, identifying founders and funding sources. Funding options include SBIR/STTR grants, venture philanthropy, angels and VC. The document also discusses licensing deals with pharma and the types of data and milestones companies seek. It suggests academic collaborations and virtual models using CROs to mature technologies and de-risk programs before partnering.
Sweet Sensors describes small, inexpensive diagnostic devices that detect a wide variety of targets, including recreational drugs, heavy metals, toxins, bacteria, viruses, pharmaceuticals, and environmental and food-safety hazards. The devices have small form factors and run quantitative tests that can be adapted to many different analytes.
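As a generic illustration of what a quantitative test involves, the sketch below calibrates a sensor signal against known standards and inverts the fit for an unknown sample; the linear model and all numbers are invented for illustration, not Sweet Sensors' actual chemistry:

```python
# Calibrate: fit signal = slope * concentration + intercept from standards,
# then invert the fit to read an unknown sample's concentration.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def concentration(signal, slope, intercept):
    """Invert the calibration to get concentration from a raw signal."""
    return (signal - intercept) / slope

# Calibration standards: concentration (ppm) vs. measured signal (toy data).
conc = [0, 1, 2, 4]
sig = [0.1, 1.1, 2.1, 4.1]
m, b = fit_line(conc, sig)
print(round(concentration(3.1, m, b), 6))  # unknown sample reads ~3.0 ppm
```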
This document summarizes a presentation about sharing best teaching practices for biomedical engineering design teams. It discusses what constitutes best practices, including project results, succession planning, evaluations, and recognition. It then outlines the structure and goals of a biomedical engineering design team course at Johns Hopkins University, including forming student teams, selecting clinical problems to address, designing prototypes, testing, and redesigning. The course has led to successful startup companies and licensed technologies. The document emphasizes fostering relationships, empowering students, and critically evaluating programs.
Essay On Biodiversity (75) - Online assignment writing service (Aliyahh King)
1. John Deere Component Works (JDCW) faced unsuccessful competitive bids due to deficiencies in its existing costing system.
2. The report analyzes JDCW's current standard costing system and recommends Activity Based Costing (ABC) as a superior alternative.
3. ABC more accurately assigns overhead costs by tracing them to the activities that drive those costs, rather than relying on volume-based allocation rates as in the traditional system.
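The difference between the two allocation methods can be shown with a toy calculation; all figures are hypothetical for illustration, not JDCW's actual data:

```python
# Volume-based allocation spreads all overhead over direct labor hours;
# ABC traces each activity's cost to products via that activity's driver.

def volume_based(overhead_total, total_labor_hours, product_hours):
    """Allocate overhead in proportion to direct labor hours."""
    rate = overhead_total / total_labor_hours
    return {p: rate * h for p, h in product_hours.items()}

def activity_based(activity_costs, activity_drivers, product_usage):
    """Trace each activity's cost to products via its cost driver."""
    rates = {a: activity_costs[a] / activity_drivers[a] for a in activity_costs}
    costs = {p: 0.0 for p in product_usage}
    for p, usage in product_usage.items():
        for a, qty in usage.items():
            costs[p] += rates[a] * qty
    return costs

# A high-volume simple part vs. a low-volume, setup-intensive complex part.
overhead = 100_000
labor = {"simple": 900, "complex": 100}          # direct labor hours
vb = volume_based(overhead, sum(labor.values()), labor)

acts = {"setups": 60_000, "machining": 40_000}   # overhead pooled by activity
drivers = {"setups": 50, "machining": 1_000}     # total setups, machine hours
usage = {"simple": {"setups": 10, "machining": 850},
         "complex": {"setups": 40, "machining": 150}}
abc = activity_based(acts, drivers, usage)

print(vb)   # volume-based loads 90% of overhead onto the high-volume part
print(abc)  # ABC shifts setup cost to the part that causes the setups
```

Under the volume basis the complex part looks cheap (it uses little labor), which is exactly the distortion that produces unprofitable winning bids and uncompetitive losing ones.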
How to be recognized as a quality OA journal (Tom Olijhoek)
The document provides an overview of assessing the quality of open access journals. It discusses the role of the Directory of Open Access Journals (DOAJ) compared to Scopus and Web of Science in evaluating quality. Key aspects of quality open access include adhering to the BOAI definition, using open licensing, and having a peer review process. Quality publishing involves best practices like transparency and editorial policies. While the Thomson Reuters Impact Factor is commonly used, it is an inappropriate measure of journal or article quality. New forms of impact assessment like altmetrics and relative citation scores provide better evaluations.
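The relative citation scores mentioned above can be illustrated with a minimal field-normalization sketch; this is a simplification of the general idea, not any database's exact formula:

```python
from statistics import mean

def relative_citation_score(citations, field_citations):
    """Citations of one article divided by the mean citation count for
    comparable articles (same field and year). A score of 1.0 means
    'cited at the field average'; 2.0 means twice the average."""
    return citations / mean(field_citations)

# An article with 10 citations in a field averaging 5 citations per paper.
print(relative_citation_score(10, [2, 4, 6, 8]))  # → 2.0
```

Because the denominator is field-specific, a score of 2.0 means the same thing in mathematics as in cell biology, which is precisely what a raw Impact Factor comparison cannot offer.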
Stephen Gutkin founded Rete Communications in 1992 to provide biomedical writing and editorial services. Rete has a network of over 15 medical writers, editors, and peer reviewers with over 400 years of combined medical communications experience. They help pharmaceutical and biotech clients effectively communicate key findings to physicians, policymakers, and patients in a credible, balanced manner while avoiding regulatory issues. Rete shepherds projects from preclinical research through the product lifecycle, assisting with publications, presentations, and training materials.
The document discusses submission fees in open access journals. It summarizes the results of interviews with 40 journal editors, publishers, librarians and researchers about submission fee models. Some journals currently charge submission fees, citing advantages like improving quality and fairness. However, publishers are mixed in their support due to risks of deterring authors. Submission fees may be most suitable for high rejection rate journals if advantages outweigh disadvantages.
This document outlines an agenda for a presentation on safeguarding research in South Africa. It discusses challenges in scientific research like plagiarism. It introduces iThenticate and Crossref Similarity Check as tools to check for plagiarism. iThenticate checks submissions against billions of webpages and academic papers. Crossref Similarity Check is a version of iThenticate for Crossref members. The presentation provides feedback from users who found the tools help identify plagiarism and reduce misconduct. It concludes with tips on implementing Crossref Similarity Check in editorial workflows.
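The matching that tools like iThenticate perform against billions of sources can be illustrated in miniature with word-shingle overlap; this toy sketch shows the general principle, not the service's actual algorithm:

```python
def shingles(text, k=5):
    """All overlapping runs of k consecutive words in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a, b, k=5):
    """Jaccard overlap of k-word shingles between two texts:
    1.0 for identical texts, 0.0 for no shared k-word run."""
    sa, sb = shingles(a, k), shingles(b, k)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

original = "this passage was copied word for word from the source"
suspect = "this passage was copied word for word from the source with edits"
sim = similarity(original, suspect)
print(round(sim, 2))  # high overlap flags the suspect text for review
```

Real services index sources at web scale and report which passages matched where; the similarity score itself is only a trigger for human editorial judgement, as the presentation's user feedback notes.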
1) The document discusses sustainable design and chemical engineering, providing tools and guidance to help organizations build sustainability into their innovation processes.
2) It introduces the concept of life cycle thinking and analyzing the environmental impacts across a product's entire life cycle from raw materials to end of life.
3) Tools like life cycle assessment (LCA) are presented to help identify hotspots where the greatest environmental impacts occur in order to focus sustainability efforts.
A lecture on evaluating AR interfaces, from the graduate course on Augmented Reality, taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury.
IHS Webcast - Counterfeiting, Obsolescence, and Risk (Tevia Arnold)
This webcast covered counterfeiting, obsolescence, and supply chain risk. Attendees were encouraged to complete a survey at the end for a chance to win an Amazon Kindle. The speakers discussed predictive obsolescence and how applying predictive forecasting tools like life-cycle codes and years to end of life estimates can help mitigate the effects of component obsolescence. Examples of counterfeit incidents were provided and it was noted that over 50 counterfeiting incidents had been reported in the last 14 days. Best practices for avoiding supply chain risk through vetting suppliers and qualifying parts from trusted sources within the approved supply chain were also discussed.
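The "years to end of life" idea can be sketched as a screening pass over a bill of materials; the threshold, part numbers, and EOL forecasts below are invented for illustration:

```python
# Flag components whose forecast end-of-life year is near: already-obsolete
# parts need trusted-source stock or a redesign, near-EOL parts need a
# lifetime buy planned before supply dries up.

def obsolescence_risk(parts, current_year, horizon=2):
    """Map part -> recommended action for parts at or near end of life."""
    at_risk = {}
    for part, eol_year in parts.items():
        years_left = eol_year - current_year
        if years_left <= 0:
            at_risk[part] = "obsolete: find trusted-source stock or redesign"
        elif years_left <= horizon:
            at_risk[part] = "near EOL: plan lifetime buy"
    return at_risk

bom = {"MCU-100": 2016, "DRAM-8G": 2015, "OPAMP-2": 2022}
risk = obsolescence_risk(bom, current_year=2015)
print(risk)  # OPAMP-2 has years of supply left and is not flagged
```

Screening proactively like this, rather than reacting to a last-time-buy notice, is what makes the forecasting "predictive" in the webcast's sense.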
This session will demystify (generative) AI by exploring its workings as an advanced statistical modelling tool, suitable for any level of technical knowledge. Not only will this session explain the technological underpinnings of AI, it will also address concerns and long-term requirements around its ethical and practical usage. This includes data preparation and cleaning, data ownership, and the value of data generated, but not owned, by libraries. It will also discuss potential (hypothetical) use cases of AI in collections environments and approaches to making collections data AI-ready, providing examples of AI capabilities and applications beyond chatbots.
Cath Dishman, Cenyu Shen, Katherine Stephan
Although scholarly communications has become more open, problems with predatory and problematic publishers remain. There are commercial list providers, grassroots internet lists of good and bad publishers, and the researchers, publishers, and assessors who try to understand what being on or off a list means for themselves, their careers, and their institutions. Still, these problems persist, leaving many asking: where is the list?
Christina Dinh Nguyen, University of Toronto Mississauga Library
In the world of digital literacies, liaison and instructional librarians are increasingly coming to terms with a new term: algorithmic literacy. No matter the liaison or instruction subjects – computer science, sociology, language and literature, chemistry, physics, economics, or other – students are grappling with assignments that demand a critical understanding, or even use, of algorithms. Over the course of this session, we’ll discuss the term ‘algorithmic literacies,’ explore how it fits into other digital literacies, and see why it as a curriculum might belong at your library. We’ll also look at some examples of practical pedagogical methods you can implement right away, depending on what types of AL lessons you want to teach, and who your patrons are. Lastly, we’ll discuss how librarians should view themselves as co-learners when working with AL skills. This session seeks to bring together participants from across the different libraries, with diverse missions/vision/mandates, to explore ways we can all benefit from teaching AL. If time permits, we may discuss how text and data librarians (functional specialists) can support the development of this curriculum.
David Pride, The Open University
In this paper, we present CORE-GPT, a novel question-answering platform that combines GPT-based language models with more than 32 million full-text open access scientific articles from CORE. We first demonstrate that GPT-3.5 and GPT-4 cannot be relied upon to provide references or citations for generated text. We then introduce CORE-GPT, which delivers evidence-based answers to questions, along with citations and links to the cited papers, greatly increasing the trustworthiness of the answers and reducing the risk of hallucinations.
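The retrieve-then-generate pattern that CORE-GPT exemplifies can be sketched as follows; retrieval here is a toy lexical ranking and the generation step is stubbed, so none of the names or identifiers reflect the actual CORE-GPT implementation:

```python
# Answer from retrieved full-text passages and cite them, rather than
# letting the model free-generate references it may hallucinate.

def retrieve(query, corpus, k=2):
    """Toy lexical retrieval: rank documents by query-word overlap."""
    q = set(query.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda kv: -len(q & set(kv[1].lower().split())))
    return scored[:k]

def answer_with_citations(query, corpus):
    hits = retrieve(query, corpus)
    evidence = " ".join(text for _, text in hits)
    citations = [doc_id for doc_id, _ in hits]
    # A real system would pass `evidence` to a language model here and
    # constrain the answer to the retrieved passages.
    return {"evidence": evidence, "citations": citations}

corpus = {"core:1": "open access increases citation rates",
          "core:2": "peer review safeguards research quality",
          "core:3": "citation rates vary across open access fields"}
out = answer_with_citations("does open access affect citation rates", corpus)
print(out["citations"])  # citations come from real retrieved documents
```

Because the citations are identifiers of documents that were actually retrieved, they can be linked and verified, which is the paper's answer to the hallucinated-reference problem.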
Cath Dishman, Cenyu Shen, Katherine Stephan
Although scholarly communications has become more open, problems with predatory and problematic publishers remain. There are commercial list providers, grassroots internet lists of good and bad publishers, and the researchers, publishers, and assessors who try to understand what being on or off a list means for themselves, their careers, and their institutions. Still, these problems persist, leaving many asking: where is the list?
This plenary panel will discuss the problems of “predatory” publishing and what, if anything, publishers, our community, and researchers can do to help minimise their prevalence and impact.
Beth Montague-Hellen, Francis Crick Institute, Katie Fraser, University of Nottingham
Open Access is a foundational topic in Scholarly Communications. However, when information professionals and publishers talk about its future, it is nearly always Gold open access we discuss. Green was seen as the big solution for providing access to those who couldn’t afford it. However, publishers have protested that Green destroys their business models. How true is this, and are we even all talking the same language when we talk about Green?
Chris Banks, Imperial College London, Caren Milloy, Jisc
Transitional agreements were developed in response to funder policy and institutional demand to constrain costs and facilitate funder compliance. They have since become the dominant model by which UK research outputs are made open access. In January 2023, Jisc instigated a critical review of TAs and the OA landscape to provide an evidence base to inform a conversation on the desired future state of research dissemination. This session will discuss the key findings of the review and its impact on a sector-wide consultation and concrete actions in the UK and beyond.
Michael Levine-Clark, University of Denver, Jason Price, SCELC Library Consortium
As transformative agreements emerge as a new standard, it is critical for libraries, consortia, publishers, and vendors to have consistent and comprehensive data – yet data around publication profiles, authorship, and readership has been shown to be highly variable in availability and accuracy. Building on prior research around frameworks for assessing the combined value of open publishing and comprehensive read access that these deals provide, we will address multi-dimensional perspectives to the challenges that the industry faces with the dissemination, collection, and analysis of data about authorship, readership, and value.
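One way to make the "combined value" idea concrete is a simple additive model of publishing value plus read value set against the agreement fee; the model and every number below are illustrative assumptions, not the presenters' framework or real deal data:

```python
# Publishing value: APCs the institution avoids paying separately.
# Read value: downloads priced at a per-use benchmark.
# Net: combined value minus the agreement fee.

def combined_value(articles_published, avg_apc, downloads, per_use_value, fee):
    publish_value = articles_published * avg_apc
    read_value = downloads * per_use_value
    return {"publish": publish_value,
            "read": read_value,
            "net": publish_value + read_value - fee}

value = combined_value(articles_published=120, avg_apc=2500,
                       downloads=80_000, per_use_value=5, fee=500_000)
print(value)
```

Even this toy model shows why the data problems the session describes matter: a miscounted authorship profile or inflated download figure moves the net directly.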
Hylke Koers, STM Solutions
Get Full Text Research (GetFTR) launched in 2020 with the objective of streamlining discovery and access of scholarly content in the many tools that researchers use today, such as Dimensions, Semantic Scholar, Mendeley, and many others. It works equally well for open access content as it does for subscription-based content, providing researchers with recognizable buttons and indicators to get them to the most up-to-date version of content with minimal effort. Currently, around 30,000 OA articles are accessed every day via GetFTR links.
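The link-resolution idea behind GetFTR can be sketched as choosing the best available full-text version for a DOI given a user's entitlements; the record fields and fallback logic below are invented for illustration and are not GetFTR's actual API, whose entitlement checks happen publisher-side:

```python
# For a DOI, return the most up-to-date version the user can access:
# the publisher version if the article is OA or the user is entitled,
# otherwise an open alternative such as a repository copy.

def best_link(doi, records, entitled_publishers):
    rec = records.get(doi)
    if rec is None:
        return None                       # unknown DOI
    if rec["oa"] or rec["publisher"] in entitled_publishers:
        return rec["url"]                 # version of record
    return rec.get("preprint_url")        # open alternative, if any

records = {"10.1234/x": {"oa": False, "publisher": "PubA",
                         "url": "https://publisher.example/x",
                         "preprint_url": "https://repo.example/x"}}
print(best_link("10.1234/x", records, {"PubA"}))  # entitled: publisher copy
print(best_link("10.1234/x", records, set()))     # not entitled: repository
```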
Gareth Cole, Loughborough University, Adrian Clark, Figshare
Researchers face more pressure than ever to share their research data, owing to a rise in funder policies and momentum towards greater openness across the research landscape. Although policies for data sharing are in place, librarians undertake engagement work to ensure repository uptake and compliance.
We will discuss a particular strategy implemented at Loughborough University that involved the application of conceptual messaging frameworks to engagement activities in order to promote and encourage use of our Figshare-powered repository. We will showcase the rationale behind the adoption of messaging frameworks for library outreach and some practical examples.
Mark Lester, Cardiff Metropolitan University
This talk will outline how a completely accidental occurrence led to brand new avenues for open research advocacy and new reasons for being. This advocacy has taken place within student communities such as trainee teachers, student psychologists and, especially, those soon to lose access to subscription-based library content. Alongside these new forms of advocacy, this ethical example of AI use has begun to form a cornerstone of directly connecting the work of the library to new technology.
Simon Bell, Bristol University Press
The UN SDG Publishers Compact, launched in 2020, was set up to inspire action among publishers to accelerate progress to achieve the Sustainable Development Goals by 2030, asking signatories to develop sustainable practices, act as champions and publish books and journals that will “inform, develop and inspire action in that direction”.
This Lightning Talk will discuss how our new Bristol University Press Digital has been developed as part of our mission to contribute a meaningful and impactful response to this call to action as well as the global social challenges we face.
Using thematic tagging to create uniquely curated themed eBook collections around the Global Social Challenges, Bristol University Press Digital responds directly to the need to provide the scholarly community with access to a comprehensive range of SDG-focused content, while minimising the time and resources institutions spend collating content and keeping collections relevant to rapidly evolving themes.
Jenni Adams, University of Sheffield, Ric Campbell, University of Sheffield
Academic researchers are becoming increasingly aware of the need to make data and software FAIR in order to support the sharing and reuse of non-publication outputs. Currently there is still a lack of concise and practical guidance on how to achieve this in the context of specific data types and disciplines.
This presentation details recent and ongoing work at the University of Sheffield to bridge this gap. It will explore the development of a FAIR resource with specialist guidance for a range of data types and will examine the planned development of this project during the period 2023-25
Tasha Mellins-Cohen, COUNTER & Mellins-Cohen Consulting; Joanna Ball, DOAJ; Yvonne Campfens, OA Switchboard; Adam Der, Max Planck Digital Library
Community-led organizations like DOAJ (Directory of Open Access Journals), COUNTER (the standard for usage metrics) and OA Switchboard (information exchange for OA publications) are committed to providing reliable, not-for-profit services and standards essential for a well-functioning global research ecosystem. These organizations operate behind the scenes, with low budgets and limited staffing – no salespeople, marketing teams, travel budgets, or in-house technology support. They collaborate with one another and with bigger infrastructure bodies like Crossref and ORCID, creating the foundations on which much scholarly infrastructure relies.
These organizations deliver value through open infrastructure, data and standards, and naturally services and tools have been built by commercial and not-for-profit groups that capitalize on their open, interoperable data and services – many of which you are likely to recognize and may use on a regular basis.
Hear from the Directors of COUNTER, DOAJ and OA Switchboard, as well as a library leader, on the role of these organizations, the challenges they face and why support from the community is essential to their sustainability.
Camille Lemieux, Springer Nature
What is the current state of diversity, equity, and inclusion in the scholarly publishing community? It's time to take a thorough look at the 2023 global Workplace Equity (WE) Survey results. The C4DISC coalition conducted the WE Survey to capture perceptions, experiences, and demographics of colleagues working at publishers, associations, libraries, and many more types of organizations in the global community. Four key themes emerged from the 2023 results, which will be compared to the findings from the first WE Survey conducted in 2018. Recommendations for actions organisations can consider within their contexts will be proposed and discussed.
How to be recognized as a quality oa journal finalTom Olijhoek
The document provides an overview of assessing the quality of open access journals. It discusses the role of the Directory of Open Access Journals (DOAJ) compared to Scopus and Web of Science in evaluating quality. Key aspects of quality open access include adhering to the BOAI definition, using open licensing, and having a peer review process. Quality publishing involves best practices like transparency and editorial policies. While the Thomson Reuters Impact Factor is commonly used, it is an inappropriate measure of journal or article quality. New forms of impact assessment like altmetrics and relative citation scores provide better evaluations.
Stephen Gutkin founded Rete Communications in 1992 to provide biomedical writing and editorial services. Rete has a network of over 15 medical writers, editors, and peer reviewers with over 400 years of combined medical communications experience. They help pharmaceutical and biotech clients effectively communicate key findings to physicians, policymakers, and patients in a credible, balanced manner while avoiding regulatory issues. Rete shepherds projects from preclinical research through the product lifecycle, assisting with publications, presentations, and training materials.
The document discusses submission fees in open access journals. It summarizes the results of interviews with 40 journal editors, publishers, librarians and researchers about submission fee models. Some journals currently charge submission fees, citing advantages like improving quality and fairness. However, publishers are mixed in their support due to risks of deterring authors. Submission fees may be most suitable for high rejection rate journals if advantages outweigh disadvantages.
This document outlines an agenda for a presentation on safeguarding research in South Africa. It discusses challenges in scientific research like plagiarism. It introduces iThenticate and Crossref Similarity Check as tools to check for plagiarism. iThenticate checks submissions against billions of webpages and academic papers. Crossref Similarity Check is a version of iThenticate for Crossref members. The presentation provides feedback from users who found the tools help identify plagiarism and reduce misconduct. It concludes with tips on implementing Crossref Similarity Check in editorial workflows.
1) The document discusses sustainable design and chemical engineering, providing tools and guidance to help organizations build sustainability into their innovation processes.
2) It introduces the concept of life cycle thinking and analyzing the environmental impacts across a product's entire life cycle from raw materials to end of life.
3) Tools like life cycle assessment (LCA) are presented to help identify hotspots where the greatest environmental impacts occur in order to focus sustainability efforts.
A lecture on evaluating AR interfaces, from the graduate course on Augmented Reality, taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury.
IHS Webcast - Counterfeiting, Obsolescence, and RiskTevia Arnold
This webcast covered counterfeiting, obsolescence, and supply chain risk. Attendees were encouraged to complete a survey at the end for a chance to win an Amazon Kindle. The speakers discussed predictive obsolescence and how applying predictive forecasting tools like life-cycle codes and years to end of life estimates can help mitigate the effects of component obsolescence. Examples of counterfeit incidents were provided and it was noted that over 50 counterfeiting incidents had been reported in the last 14 days. Best practices for avoiding supply chain risk through vetting suppliers and qualifying parts from trusted sources within the approved supply chain were also discussed.
This session will demystify (generative) AI by exploring its workings as an advanced statistical modelling tool (suitable for any level of technical knowledge). Not only will this session explain the technological underpinnings of AI, it will also address concerns and (long-term) requirements around ethical and practical usage of AI. This includes data preparation and cleaning, data ownership, and the value of data-generated - but not owned - by libraries. It will also discuss the potentials for (hypothetical) use cases of AI in collections environments and making collections data AI-ready; providing examples of AI capabilities and applications beyond chatbots.
CATH DISHMAN, CENYU SHEN,
KATHERINE STEPHAN
Although scholarly communications has become more open, problems with predatory and problematic publishers remain. There are commercial providers of lists, start-up/renegade Internet lists of good/bad and the researchers, publishers and assessors that try to understand and process what being on/off a list means to themselves, their careers and their institutions. Still, these problems persist and leaves many asking: where is the list?
Christina Dinh Nguyen, University of Toronto Mississauga Library
In the world of digital literacies, liaison and instructional librarians are increasingly coming to terms with a new term: algorithmic literacy. No matter the liaison or instruction subjects – computer science, sociology, language and literature, chemistry, physics, economics, or other – students are grappling with assignments that demand a critical understanding, or even use, of algorithms. Over the course of this session, we’ll discuss the term ‘algorithmic literacies,’ explore how it fits into other digital literacies, and see why it as a curriculum might belong at your library. We’ll also look at some examples of practical pedagogical methods you can implement right away, depending on what types of AL lessons you want to teach, and who your patrons are. Lastly, we’ll discuss how librarians should view themselves as co-learners when working with AL skills. This session seeks to bring together participants from across the different libraries, with diverse missions/vision/mandates, to explore ways we can all benefit from teaching AL. If time permits, we may discuss how text and data librarians (functional specialists) can support the development of this curriculum.
David Pride, The Open University
In this paper, we present CORE-GPT, a novel question- answering platform that combines GPT-based language models and more than 32 million full-text open access scientific articles from CORE. We first demonstrate that GPT3.5 and GPT4 cannot be relied upon to provide references or citations for generated text. We then introduce CORE-GPT which delivers evidence-based answers to questions, along with citations and links to the cited papers, greatly increasing the trustworthiness of the answers and reducing the risk of hallucinations.
Cath Dishman, Cenyu Shen, Katherine Stephan
Although scholarly communications has become more open, problems with predatory and problematic publishers remain. There are commercial providers of lists, start-up/renegade Internet lists of good/bad and the researchers, publishers and assessors that try to understand and process what being on/off a list means to themselves, their careers and their institutions. Still, these problems persist and leaves many asking: where is the list?
This plenary panel will discuss the problems of “predatory” publishing and what, if anything, publishers, our community and researchers can do to try and help minimise their abundancy/impact.
eth Montague-Hellen, Francis Crick Institute, Katie Fraser, University of Nottingham
Open Access is a foundational topic in Scholarly Communications. However, when information professionals and publishers talk about its future, it is nearly always Gold open access we discuss. Green was seen as the big solution for providing access to those who couldn’t afford it. However, publishers have protested that Green destroys their business models. How true is this, and are we even all talking the same language when we talk about Green?
Chris Banks, Imperial College London, Caren Milloy, Jisc,
Transitional agreements were developed in response to funder policy and institutional demand to constrain costs and facilitate funder compliance. They have since become the dominant model by which UK research outputs are made open access. In January 2023, Jisc instigated a critical review of TAs and the OA landscape to provide an evidence base to inform a conversation on the desired future state of research dissemination. This session will discuss the key findings of the review and its impact on a sector-wide consultation and concrete actions in the UK and beyond.
Michael Levine-Clark, University of Denver, Jason Price, SCELC Library Consortium
As transformative agreements emerge as a new standard, it is critical for libraries, consortia, publishers, and vendors to have consistent and comprehensive data – yet data around publication profiles, authorship, and readership has been shown to be highly variable in availability and accuracy. Building on prior research around frameworks for assessing the combined value of open publishing and comprehensive read access that these deals provide, we will address multi-dimensional perspectives to the challenges that the industry faces with the dissemination, collection, and analysis of data about authorship, readership, and value.
Hylke Koers, STM Solutions
Get Full Text Research (GetFTR) launched in 2020 with the objective of streamlining discovery and access of scholarly content in the many tools that researchers use today, such as Dimensions, Semantic Scholar, Mendeley, and many others. It works equally well for open access content as it does for subscription-based content, providing researchers with recognizable buttons and indicators to get them to the most up-to-date version of content with minimal effort. Currently, around 30,000 OA articles are accessed every day via GetFTR links.
Gareth Cole, Loughborough University, Adrian Clark, Figshare
Researchers face more pressure to share their research data than ever before, owing to a rise in funder policies and momentum towards greater openness across the research landscape. Although policies for data sharing are in place, engagement work is undertaken by librarians in order to ensure repository uptake and compliance.
We will discuss a particular strategy implemented at Loughborough University that involved the application of conceptual messaging frameworks to engagement activities in order to promote and encourage use of our Figshare-powered repository. We will showcase the rationale behind the adoption of messaging frameworks for library outreach and some practical examples.
Mark Lester, Cardiff Metropolitan University
This talk will outline how a completely accidental occurrence led to brand new avenues for open research advocacy and reasons for being. This advocacy has occurred within student communities such as trainee teachers, student psychologists and (especially) those soon losing access to subscription-based library content. Alongside these new forms of advocacy, these ethical AI use cases have begun to form a cornerstone of directly connecting the work of the library to new technology.
Simon Bell, Bristol University Press
The UN SDG Publishers Compact, launched in 2020, was set up to inspire action among publishers to accelerate progress to achieve the Sustainable Development Goals by 2030, asking signatories to develop sustainable practices, act as champions and publish books and journals that will “inform, develop and inspire action in that direction”.
This Lightning Talk will discuss how our new Bristol University Press Digital has been developed as part of our mission to contribute a meaningful and impactful response to this call to action as well as the global social challenges we face.
Using thematic tagging to create uniquely curated themed eBook collections around the Global Social Challenges, Bristol University Press Digital responds directly to the need to provide the scholarly community access to a comprehensive range of SDG-focussed content, while minimising the time and resource required at the institution end to collate content and maintain collection relevance to rapidly evolving themes.
Jenni Adams, University of Sheffield, Ric Campbell, University of Sheffield
Academic researchers are becoming increasingly aware of the need to make data and software FAIR in order to support the sharing and reuse of non-publication outputs. Currently there is still a lack of concise and practical guidance on how to achieve this in the context of specific data types and disciplines.
This presentation details recent and ongoing work at the University of Sheffield to bridge this gap. It will explore the development of a FAIR resource offering specialist guidance for a range of data types, and will examine the planned development of this project during the period 2023-25.
Tasha Mellins-Cohen, COUNTER & Mellins-Cohen Consulting, Joanna Ball, DOAJ, Yvonne Campfens, OA Switchboard, Adam Der, Max Planck Digital Library
Community-led organizations like DOAJ (Directory of Open Access Journals), COUNTER (the standard for usage metrics) and OA Switchboard (information exchange for OA publications) are committed to providing reliable, not-for-profit services and standards essential for a well-functioning global research ecosystem. These organizations operate behind the scenes, with low budgets and limited staffing – no salespeople, marketing teams, travel budgets, or in-house technology support. They collaborate with one another and with bigger infrastructure bodies like Crossref and ORCID, creating the foundations on which much scholarly infrastructure relies.
These organizations deliver value through open infrastructure, data and standards, and naturally services and tools have been built by commercial and not-for-profit groups that capitalize on their open, interoperable data and services – many of which you are likely to recognize and may use on a regular basis.
Hear from the Directors of COUNTER, DOAJ and OA Switchboard, as well as a library leader, on the role of these organizations, the challenges they face and why support from the community is essential to their sustainability.
Camille Lemieux, Springer Nature
What is the current state of diversity, equity, and inclusion in the scholarly publishing community? It's time to take a thorough look at the 2023 global Workplace Equity (WE) Survey results. The C4DISC coalition conducted the WE Survey to capture perceptions, experiences, and demographics of colleagues working at publishers, associations, libraries, and many more types of organizations in the global community. Four key themes emerged from the 2023 results, which will be compared to the findings from the first WE Survey conducted in 2018. Recommendations for actions organisations can consider within their contexts will be proposed and discussed.
Rob Johnson, Research Consulting
Angela Cochran, American Society of Clinical Oncology
Gaynor Redvers-Mutton, Biochemical Society
Since 2015, the number of self-publishing learned societies in the UK has decreased by over a third, with the remaining societies experiencing real-terms revenue declines. All around the world, society publishers are struggling with increased competition from commercial publishers and the rise of open access business models that reward quantity over quality. We will delve into the distinctive position of societies in research, examine the challenges confronting UK and US learned society publishers, and explore actionable steps for libraries and policymakers to support the continued relevance of learned society publishers in the evolving scholarly landscape.
Simon Bell, Clare Hooper, Katharine Horton, Ian Morgan
Over the last few years we have witnessed a seismic shift in the scholarly ecosystem. Three years on from the outset of the COVID pandemic and the establishment of the UN SDG Publishers Compact, this discussion-led presentation will look at how four UK university presses have adopted a consultative and collaborative approach to projects that support their institutional missions and engage with the wider scholarly community, while building on a commitment to make a meaningful difference to society.
This panel discussion will combine the perspectives of four UK based university presses, all with distinct identities and varied publishing programs drawn from humanities, arts and social sciences, yet with a shared recognition of the importance of collaborating and co-operating on a shared vision to support accessibility and inclusivity within the wider scholarly community and maintain a rich bibliodiversity.
While research support teams are generally small and specialist in nature, increased demand for their services has been observed across the sector. This is particularly true for teaching-intensive institutions. As a pilot to expand research support across ARU library, the library graduate trainee was seconded to the research services team for a month. This dialogue between the former trainee and manager will discuss what the experience and outcomes of the secondment were from different perspectives. The conversation will also explore the exposure Library and Information Studies students have to research services throughout their degree.
Tim Fellows & Emily Wild, Jisc
Octopus.ac is a UKRI funded research publishing model, designed to promote best practice. Intended to sit alongside journals, Octopus provides a space for researcher collaboration, recording work in detail, and receiving feedback from others, allowing journals to focus on narrative.
The platform removes existing barriers to publishing. It’s an entirely free, open space for researchers, without editorial and pre-publication peer review processes. The only requirement for authors is a valid ORCID iD. Without barriers, Octopus must provide feedback mechanisms to ensure the community can self-moderate. During this session, we’ll explore Octopus’ aims to foster a collaborative environment and incentivise quality.
David Parker, Publisher and Founder, Lived Places Publishing
Dr. Kadian Pow, Lecturer in Sociology and Black Studies & LPP Author, Birmingham City University
Natasha Edmonds, Director, Publisher and Industry Strategy, Clarivate
Library patrons want to search for and locate authors by particular identity markers, such as gender identification, country of origin, sexual orientation, nature of disability, and the many intersectional points that allow an author to express a point-of-view. Artificial Intelligence, skilled web researchers, and data scientists in general struggle to achieve accuracy on single identity markers, such as gender. And what right does anybody have to affix identity metadata to an author other than the author themselves? And what of the risks in disseminating author identity metadata in electronic distribution platforms and in library catalog systems? Can a "fully informed" author even imagine all the possible misuses of their identity metadata?
6. How common is misconduct?
Systematic review (screened 3207 papers)
Meta-analysis (18 studies)
• surveys of fabrication or falsification
• NOT plagiarism
2% admitted misconduct themselves (95% CI 0.9-4.5)
14% aware of misconduct by others (95% CI 9.9-19.7)
Fanelli PLoS One 2009;4(5):e5738
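The slide's intervals come from Fanelli's pooled meta-analysis of 18 studies, but the general shape of such a confidence interval for a proportion can be sketched with a Wilson score interval. This is illustrative only; the 10/500 survey below is a made-up example, not one of the pooled studies:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a single observed proportion (z = 1.96)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

# Hypothetical single survey: 10 of 500 respondents (2%) admit misconduct
low, high = wilson_ci(10, 500)
print(f"2.0% (95% CI {low:.1%}-{high:.1%})")
```

Note how the interval is asymmetric around 2%, as on the slide; a proper meta-analytic interval additionally weights and pools the individual studies.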
7. How often is misconduct detected?
PubMed retractions: 0.02%
US Office of Research Integrity (ORI): 0.01-0.001% (1 in 10,000 to 1 in 100,000 scientists)
Image manipulation in J Cell Biology: 1% (8/800)
FDA audit – investigators guilty of serious scientific misconduct: 2%
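Setting these detection rates against the ~2% self-admission rate from the previous slide gives a rough sense of how rarely misconduct surfaces; a back-of-envelope sketch (researcher-level vs paper-level rates, so only indicative):

```python
admitted_rate = 0.02      # ~2% of researchers admit fabrication/falsification (Fanelli 2009)
retraction_rate = 0.0002  # ~0.02% of PubMed papers are retracted

# Crude comparison of the two rates
ratio = admitted_rate / retraction_rate
print(f"Admitted misconduct rate is ~{ratio:.0f}x the retraction rate")
```

Even this crude comparison suggests that the visible, corrected literature captures only a small fraction of the misconduct researchers themselves report.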
8. Because major ethical problems are (quite) rare
Editors don’t see many cases during their term of office
Publishers looking after many journals can provide ‘corporate memory’
AND
Editors are largely untrained
10. What should journals & publishers do?
Educate
Raise awareness
Have clear policies
?Screen
?Discipline
11. Tools for detecting misconduct
Anti-plagiarism software (eg eTBLAST, CrossCheck, Turnitin)
Screening images (Photoshop)
Chemical structure checks
Data review (digit preference)
12. CrossCheck
Based on iParadigms software
Compares text against publishers’ database
Database run by CrossRef (doi system)
Database currently contains 59,000 titles
Shows % concordance + source
Can exclude “quotes” and references
?False positives / ‘noise’ level
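CrossCheck's matching algorithm is proprietary, but the "% concordance" idea can be illustrated with a toy word n-gram overlap measure (this is not iParadigms' actual method):

```python
def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """All word n-grams of a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def concordance(submission: str, source: str, n: int = 3) -> float:
    """Fraction of the submission's word n-grams also found in the source."""
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

a = "the results show a significant increase in expression levels"
b = "our data show a significant increase in expression levels overall"
print(f"{concordance(a, b):.0%}")  # → 71%
```

Even this toy version shows why a "noise" threshold is needed: short stock phrases and properly quoted material inflate the score, which is why CrossCheck can exclude quotes and references.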
13. Image screening
Pioneered by J Cell Biology
Used in some life sciences journals
Important for research where
the image = the findings
• genetics / cell biology / radiography
Found 1% unacceptable manipulation
Manual check using Photoshop
Requires editor time / expertise
Rossner & Yamada, JCB 2004;166:11-15
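One cue a screener looks for is copy-pasted regions within a blot image. A toy sketch of that cue over a plain grayscale pixel grid (real screening relies on expert visual inspection in Photoshop and is far subtler):

```python
from collections import defaultdict

def duplicated_blocks(pixels: list[list[int]], size: int = 2) -> dict:
    """Return size x size pixel blocks that appear at more than one position –
    a crude signal of copy-paste manipulation within an image."""
    seen = defaultdict(list)
    h, w = len(pixels), len(pixels[0])
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            block = tuple(tuple(row[x:x + size]) for row in pixels[y:y + size])
            seen[block].append((x, y))
    return {b: pos for b, pos in seen.items() if len(pos) > 1}

# Toy 'blot': the dark 2x2 patch of 9s appears in two places
img = [
    [9, 9, 0, 9, 9],
    [9, 9, 0, 9, 9],
    [0, 0, 0, 0, 0],
]
print(duplicated_blocks(img))
```

In practice, manipulated images rarely contain bit-identical copies (compression, rescaling and retouching perturb pixels), which is exactly why JCB's check required editor time and expertise rather than automation.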
16. Chemical structure checks
Examined structure-factor files
Identified >70 bogus organic structures
Authors had taken a genuine structure and switched metals (eg Fe / Cu) or chemical groups (CH2 / NH / OH)
Editors note: “it is a concern and a disappointment that these [chemically implausible or impossible structures] passed into the literature”
>70 articles retracted
Acta Crystallographica 2010;E66:e1-2
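The fraud pattern described here – a genuine structure republished with elements swapped – suggests a naive automated screen: flag structures whose atom positions match an existing entry while the element labels differ. A toy sketch (not how real crystallographic validation such as checkCIF works):

```python
def element_swap_suspect(s1, s2, tol: float = 1e-3) -> bool:
    """True if two structures share atom positions but differ in element labels –
    the signature of the 'swap Fe for Cu' fraud. Each structure is a list of
    (element, (x, y, z)) tuples in the same order and setting (toy assumption)."""
    if len(s1) != len(s2):
        return False
    same_positions = all(
        all(abs(a - b) < tol for a, b in zip(p1, p2))
        for (_, p1), (_, p2) in zip(s1, s2)
    )
    elements_differ = any(e1 != e2 for (e1, _), (e2, _) in zip(s1, s2))
    return same_positions and elements_differ

genuine = [("Fe", (0.0, 0.0, 0.0)), ("O", (0.5, 0.5, 0.5))]
bogus = [("Cu", (0.0, 0.0, 0.0)), ("O", (0.5, 0.5, 0.5))]
print(element_swap_suspect(genuine, bogus))  # → True
```

A real screen would compare against the whole deposited database and handle re-indexed cells; the point is only that this class of fraud leaves a mechanically detectable fingerprint.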
17. Where to screen? (frequency vs severity)
High frequency + high severity: yes
High frequency + low severity: ?
Low frequency + high severity: ?
Low frequency + low severity: no
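The slide's frequency/severity matrix can be read as a simple decision rule; one plausible reading (assumption: screen when frequency and severity are both high, don't when both are low, judgement call otherwise):

```python
def screening_decision(frequency: str, severity: str) -> str:
    """One reading of the slide's 2x2 matrix: 'yes' = screen routinely,
    'no' = don't screen, '?' = judgement call (assumed interpretation)."""
    if frequency == "high" and severity == "high":
        return "yes"
    if frequency == "low" and severity == "low":
        return "no"
    return "?"

for f in ("low", "high"):
    for s in ("low", "high"):
        print(f"frequency={f}, severity={s}: {screening_decision(f, s)}")
```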
Gross manipulation of blots. (A) Example of a band deleted from the original data (lane 3). (B) Example of a band added to the original data (lane 3).
This shows a timeline leading up to the current position:
1980s – concern about publication bias started to come from people compiling systematic reviews (eg the Cochrane collaboration)
1986 – the first major paper calling for trial registration was by John Simes in the Journal of Clinical Oncology
1990 – a more influential paper was published in JAMA by Iain Chalmers (one of the founders of the Cochrane collaboration); in the same year, Kay Dickersin published an important paper about risk factors for publication bias
1997 – the study from Tramer et al provided clear evidence that covert duplicate publication (in this case about GW's anti-emetic ondansetron) could bias the results of meta-analyses
Late 1997 – FDAMA (the FDA Modernization Act) came into force and clinicaltrials.gov was set up to register trials (these will be covered in more detail in later slides)
1999 – Glaxo Wellcome was one of the first drug companies to establish its own trial register; it was retrospective (ie included studies only after a product was licensed) and didn't survive the GSK merger
The UK industry association (ABPI) created a register but it was largely ignored
In late 2004, the editors of several major journals announced that trial registration would be compulsory, and that trials had to be registered by Sept 15th 2005. This graph clearly shows the effects of that deadline. It's interesting to note that it's not only commercial but also academic studies being registered. (The graph shows the number of NEW registrations per week at clinicaltrials.gov.)