The document discusses how STM publishers will need to move from a "digital library era" model to a "platform-as-a-service era" model to remain innovative and successful. It illustrates how scenarios such as updating medical terminology or integrating content across devices become easier in the new model, using standardized APIs and automated workflows rather than manual processes. It frames the goals for the publisher of the future as making content fragments easy to work with and discover across channels and formats, and leveraging web standards to integrate and compose content in new ways.
Innovation and the STM publisher of the future (SSP IN Conference 2011) - Bradley Allen
The document discusses how STM publishers are transitioning from the digital library era to the platform-as-a-service era. In the new era, content will be packaged as apps and APIs rather than books and articles. Publishers will focus on making all content types and delivery channels equally capable and flexible. They will aim to make it easy to discover, access, aggregate, and compose content fragments across all assets. The goal is to leverage web standards to increase integration and interoperability of content.
Semantics empowered Physical-Cyber-Social Systems for EarthCube - Amit Sheth
Presentation at the EarthCube Face-to-Face Workshop of the Semantics & Ontologies Workgroup: April 30-May 1, 2012, Ballston, VA.
Workshop site: http://earthcube.ning.com/group/semantics-and-ontologies/page/workshops
For more recent material on this topic, see: http://wiki.knoesis.org/index.php/PCS
Open Source for Enterprise Search: Breaking Down the Barriers to Information - Lucidworks (Archived)
This document summarizes a webcast on open source enterprise search solutions. It discusses how search is used in organizations for applications like intranet search, call centers, ecommerce, and analytics. It outlines the key components of search platforms like querying, filtering, indexing, language analysis, and visualization. Finally, it discusses the types of search products available from customizable point solutions to integrated platforms that support multiple data sources and applications.
This proposal outlines the development of a comprehensive information retrieval portal for Canadian scientific researchers. The portal would aggregate content from various sources and use techniques like collaborative filtering and content analysis to provide personalized search and recommendations. It would include features for user profiling, concept discovery, and interactive visualization of results. The proposal discusses forming partnerships with organizations to incorporate additional content and conducting a pilot program to evaluate the portal's usability and ability to improve search satisfaction and reuse.
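The collaborative-filtering component could work roughly like this sketch, which recommends articles read by similar users. The researcher names, article IDs, and similarity measure here are illustrative assumptions, not details from the proposal:

```python
from collections import defaultdict

# Toy user-based collaborative filtering: suggest articles read by
# users whose reading history overlaps with yours. All names and
# article IDs are hypothetical.
ratings = {
    "alice": {"a1", "a2", "a3"},
    "bob":   {"a2", "a3", "a4"},
    "carol": {"a5"},
}

def jaccard(x, y):
    """Overlap between two users' article sets (0.0 to 1.0)."""
    return len(x & y) / len(x | y) if x | y else 0.0

def recommend(user, k=2):
    """Score unread articles by the similarity of the users who read them."""
    scores = defaultdict(float)
    for other, items in ratings.items():
        if other == user:
            continue
        sim = jaccard(ratings[user], items)
        if sim == 0.0:
            continue  # ignore users with no overlap at all
        for item in items - ratings[user]:
            scores[item] += sim
    return [i for i, _ in sorted(scores.items(), key=lambda t: -t[1])[:k]]

print(recommend("alice"))  # bob shares a2 and a3, so a4 is suggested
```

A production portal would combine this signal with the content-analysis features the proposal mentions rather than relying on co-reading alone.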
This document discusses tools and strategies for managing digital assets in scholarly publishing. It provides a timeline of how digital assets have evolved from 50 years ago when content was paper-based to today's digital environment. It then examines the goals of digital tools in reducing costs, increasing efficiency and quality. As a case study, it outlines the benefits of using an online submission and peer review system in automating workflows and reporting. It concludes by considering future digital assets like interactive readers' commentaries and open document standards.
The document discusses the importance of metadata for publishers in the digital era. It defines metadata as "data about data" and explains that metadata has become critical for allowing computers and systems to communicate about content. Metadata impacts publisher processes by enabling content to reach the right audiences through various relationships and channels. The document provides examples of how metadata was essential for a drug reference product and a medical content provider to organize their content and drive various outputs. It emphasizes that metadata is just as, if not more, important than the raw content itself for digital publishing.
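The summary's point that metadata, not the raw content, determines where content can go can be illustrated with a toy example. The field names are Dublin Core-flavored, and the channels and routing rules are hypothetical:

```python
# A minimal illustration of "data about data": a descriptive record
# attached to a content asset, and a routing rule that picks delivery
# channels from the metadata alone. Fields, channels, and rules are
# hypothetical examples, not any publisher's actual schema.
article = {
    "title": "Beta-blocker dosing in heart failure",
    "type": "drug-monograph",
    "subjects": ["cardiology", "pharmacology"],
    "audience": "clinician",
    "rights": "subscription",
}

def channels(meta):
    """Decide delivery channels without ever reading the content body."""
    out = []
    if meta["type"] == "drug-monograph":
        out.append("point-of-care app")
    if "cardiology" in meta["subjects"]:
        out.append("cardiology portal")
    if meta["rights"] == "subscription":
        out.append("institutional site license feed")
    return out

print(channels(article))
```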
The document summarizes Roni Zeiger's presentation at the 2007 Annual Meeting of the Society for Scholarly Publishing. The presentation discussed Google's beta testing process, Google Co-op and Custom Search Engines programs, examples of custom search engines created for publishers, government agencies, industries, libraries and companies, and how Google designs experiments to determine what users want from search experiences.
The document summarizes a presentation given by the Director of Publications at the Association for Computing Machinery (ACM) about ACM's scholarly publishing program. It discusses how ACM adds value through its digital library rather than print, including sophisticated search tools and citation information. It notes that the digital library generates over $8 million annually, mostly from consortia and site licenses. However, ACM is exploring new business models like transactional pricing and advertising to address the risks of relying on consortia revenue long-term. Significant future opportunities for growth are seen in emerging markets like China, India, and Africa.
PLoS ONE is the largest open access scientific mega journal published by the Public Library of Science. It uses an innovative editorial process that objectively evaluates scientific validity rather than perceived importance. Since launching in 2006, PLoS ONE has grown rapidly and now publishes over 10,000 articles per year, accounting for around 1.5% of the scientific literature. The success of PLoS ONE has led many major publishers to launch similar open access mega journals that aim to dominate entire fields through large volumes of publications.
This document discusses trends in professional publishing and how to evaluate new features. It notes continuous pressure to add new features to attract readers and compete with other sites. The document instead advocates a strategic approach: ask what problem a feature solves, whether it will attract key audiences, and whether readers will actually use it. It recommends a three-stage screening process: 1) would the feature improve important metrics, 2) would readers find and use it, and 3) once launched, is it actually used. The key is focusing on features that drive traffic to content rather than simply adding "new" features.
The document discusses the emerging e-book market and technologies. It notes that e-book readers have overcome earlier usability issues, that a handful of formats have emerged as standards, and that e-books have moved beyond novels to other material such as blogs, magazines, and newspapers. Major players in the e-book ecosystem include Amazon, Sony, and Smashwords. The future of the industry remains uncertain, but growth is rapid as reader prices fall and more content becomes available in digital form.
SAGE implemented a digital content repository using RSuite to automate workflows from manuscript acceptance to publication. The repository securely stores SAGE's online assets, delivers journal content, and enables analytics. It supports over 560 journals, 770,000 articles, and 70,000 issue deliveries annually. The repository provides flexibility, scalability, and worldwide access. It has processed over 100,000 issue deliveries with 99.5% uptime and will expand to additional content types.
The document summarizes Chuck Koscher's presentation on linking implementation at the Society for Scholarly Publishing in May 2003. The presentation covered refresher information on DOIs and CrossRef, statistics on CrossRef's growth, and new linking services including free DOI queries, parameter passing, forward linking, and handling ambiguous results.
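As a present-day illustration of the DOI linking covered in the presentation: a DOI resolves through the doi.org resolver, and parameter passing appends extra key-value pairs to the link. The parameter names below are hypothetical, and this is a sketch of the general mechanism rather than CrossRef's exact 2003-era interface:

```python
from urllib.parse import quote, urlencode

def doi_link(doi):
    """A DOI becomes a persistent link via the doi.org resolver."""
    return "https://doi.org/" + quote(doi)

def doi_link_with_params(doi, **params):
    """Parameter passing: extra key=value pairs appended to the
    resolver URL so they can travel through to the landing page.
    The parameter names used by callers are hypothetical."""
    return doi_link(doi) + "?" + urlencode(params)

print(doi_link("10.1000/xyz123"))  # -> https://doi.org/10.1000/xyz123
print(doi_link_with_params("10.1000/xyz123", ref="toc"))
```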
Automatic semantic interpretation of unstructured data for knowledge management - tmra
The document summarizes a demo of an automatic semantic analysis technique for knowledge discovery from unstructured data like Wikipedia articles. The demo shows a linked concept graph and linked data graph created by analyzing astronomy articles. It also discusses how the technique can be used for knowledge representation, discovery, navigation, and intelligence by linking isolated data and deriving a taxonomy. The technical solution takes a bottom-up approach using semantic data integration and analysis to dynamically create and update object and concept graphs in real-time from various data sources.
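The bottom-up graph construction the demo describes can be sketched, very loosely, as linking concepts that co-occur in the same documents. This toy example illustrates the general idea only; it is not the presented system's actual algorithm:

```python
from itertools import combinations
from collections import Counter

# Toy bottom-up concept graph: add an edge between two terms whenever
# they appear in the same document, weighted by co-occurrence count.
# The documents and terms are invented for illustration.
docs = [
    {"star", "galaxy", "telescope"},
    {"star", "planet"},
    {"galaxy", "telescope"},
]

edges = Counter()
for terms in docs:
    for a, b in combinations(sorted(terms), 2):
        edges[(a, b)] += 1

# The heaviest edge suggests the most closely related concept pair.
print(edges.most_common(1))
```

A real system would update such a graph incrementally as new sources arrive, and layer a derived taxonomy on top of it.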
This document provides an overview of the WeKnowIt project, which aims to develop an emerging collective intelligence system for personal, organizational, and social use. The project utilizes publicly available data from sources like Flickr to perform tasks such as image clustering, named entity detection, and event detection. It also outlines available datasets, planned expansions to capabilities, and contact information for the consortium members involved in the project.
Veda is an end-to-end semantic framework that originated from Fraunhofer Institute in Germany. It covers all aspects of a business solution including collecting structured and unstructured data, organizing it semantically, and presenting information through retrieval and analysis. Veda's product portfolio includes tools for extraction, classification, matching, and inference. It has been deployed in solutions for workflows, social media analytics, recruitment, and patent searching. Key differentiators of Veda include its end-to-end technology coverage, business application focus, implementation team, and commercial models.
Albert Simard - Mobilizing Knowledge: Acquisition, Analysis, and Action
Presentation at the Canadian Knowledge Mobilization Forum 2012, Ottawa, Ontario, http://www.kmbforum2012.org/
ECLAP Tutorial first part, ECLAP 2012 conference: the general overview - Paolo Nesi
The document provides an overview of the ECLAP project, which aims to create a social service portal and digital archive for performing arts content. It discusses the goals of providing high quality metadata and tools for libraries, education, and access across different devices. The ECLAP system will include services for content aggregation, semantic searching, recommendations, networking and distribution to partners like Europeana.
The document discusses the emergence of web-scale library platforms that move away from locally-housed systems towards globally shared platforms. These new library services platforms offer opportunities for libraries to operate less in isolated silos and more within broad, web-scale environments of highly shared data and unified workflows across physical, digital, and electronic collections. Discovery services have led the way towards this web-scale approach, and library management systems are now following a similar path.
This summary provides the key points from notes on a discussion about open educational resources (OERs):
1. Tracking use of OERs was discussed, including whether it is possible and worthwhile to track usage metrics like downloads, views, and patterns of production over time.
2. Usability of OER repositories for depositing, discovering, and using resources was a topic, along with issues like metadata, search interfaces, and barriers to access.
3. Streaming large files, bandwidth management, and bulk downloading of OERs were additional technical issues that were brought up.
4. Design processes and tools to support creators of OERs, as well as licensing, rights encoding, and ensuring
STUG PAF-KIET, 28 January, live and on location: Enterprise Content Management - Shakir Majeed Khan
The document outlines an agenda for the SharePoint Techies User Group meeting. It includes three session topics: Enterprise Content Management in SharePoint 2010, How to get ready for SharePoint 2010 development, and Windows Phone 7. It then provides more details on key features of Enterprise Content Management in SharePoint 2010, including taxonomy, document sets, document IDs, and in-place records management. Demonstrations are given of taxonomy and document sets. Contact information is provided for the user group leader.
The Information Workbench as a Self-Service Platform for Linked Data Applicat... - Peter Haase
The document describes the Information Workbench, a self-service platform for developing linked data applications. The key points are:
1. Developing linked data applications is challenging due to issues like integrating diverse data sources and ensuring data and interface quality.
2. The Information Workbench addresses these challenges by providing semantics-based integration of public and private data sources, intelligent data access and analytics tools, and a collaborative authoring environment.
3. The platform uses a self-service model where users can provision instances in the cloud, discover and integrate relevant linked open data sources, customize interfaces using semantic widgets, and extend the platform with their own components.
Integrating digital traces into a semantic enriched data - Dhaval Thakker
The document discusses integrating digital traces from social media into a semantic-enriched data cloud for informal learning. It outlines a processing pipeline that collects digital traces, semantically augments them using ontologies, and allows browsing and interaction through a semantic query service. An exploratory study on job interviews found that authentic examples from digital traces were useful learning stimuli but could be mistaken as norms without context. Semantic technologies provide opportunities to organize digital traces for informal learning but further work is needed to fully realize this potential.
Knowledge Base+: a Cloud-Based Community Knowledge Base - sherif user group
Knowledge Base+: A cloud-based community knowledge base by Ben Showers, JISC. Presentation at the JIBS User Group Workshop and AGM Back to the Future and Into the Cloud, 24 February 2012, School of Oriental and African Studies, London.
The document discusses using managed metadata and taxonomies in SharePoint 2010. It provides an overview of metadata, taxonomy management, and content type hubs. It also describes how a company's information architecture grew organically over time without a taxonomy, leading to questions about where to store and find information. The presentation recommends using SharePoint's managed metadata service to provide a centralized taxonomy that can be consumed for navigation, search, and views to help organize an enterprise's information.
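The centralized-taxonomy idea can be sketched as a term tree in which each tag resolves to a full path usable for navigation, search refiners, and filtered views. This illustration uses hypothetical term names and plain Python, not SharePoint's actual managed metadata API:

```python
# Sketch of a centralized term set: a tree of terms, where tagging a
# document with a leaf term gives it the whole ancestry for free.
# Term names are hypothetical.
taxonomy = {
    "Departments": {
        "Finance": {"Payroll": {}, "Audit": {}},
        "Engineering": {"Platform": {}},
    }
}

def path_to(term, tree=taxonomy, trail=()):
    """Return the full taxonomy path of a term, or None if absent."""
    for name, children in tree.items():
        here = trail + (name,)
        if name == term:
            return " > ".join(here)
        found = path_to(term, children, here)
        if found:
            return found
    return None

print(path_to("Payroll"))  # Departments > Finance > Payroll
```

Because every tag carries its path, navigation menus, search filters, and library views can all be driven from the one shared term store rather than per-site folder structures.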
The Next-Generation SharePoint: Powered by Text Analytics - Peter Wren-Hilton
This document discusses how text analytics can power the next generation of SharePoint. It begins by outlining common information tasks and how much time they take. It then discusses what text analytics is, how it works, and how it can save time on tasks like search, metadata extraction, and sentiment analysis. It provides examples of text analytics APIs and open source tools. It concludes by demonstrating how text analytics can be integrated into SharePoint using APIs.
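A minimal sketch of the metadata-extraction task the talk describes: pull candidate keywords from a document by frequency after dropping stopwords. Real text analytics engines use far richer NLP than this toy example, and the stopword list here is an arbitrary illustration:

```python
import re
from collections import Counter

# Crude keyword extraction: tokenize, drop stopwords, rank by count.
# A real engine would add stemming, phrase detection, and statistics
# against a background corpus.
STOP = {"the", "of", "and", "a", "to", "in", "for", "is", "on"}

def keywords(text, k=3):
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP)
    return [w for w, _ in counts.most_common(k)]

doc = ("Search in SharePoint improves when metadata is extracted "
       "automatically; metadata drives search refiners and search alerts.")
print(keywords(doc))
```

Even this crude ranking hints at why automated tagging saves time: the top terms become metadata without anyone filling in a form.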
MeshLabs is a pure-play developer of text analytics software. Our core product is a hybrid text analytics engine that combines linguistic (NLP), statistical, and semantic approaches to process large volumes of unstructured and structured content. Built to enterprise performance standards, the engine offers flexible integration capabilities, including content connectors and APIs. We are a team of information retrieval professionals who are passionate about solving complex unstructured data processing problems for a variety of industries. Our product is deployed at large enterprises globally. We specialize in developing products that use emerging content processing technologies to solve complex customer experience management problems. I can discuss specific ideas, best practices, and case studies with you.
This document summarizes a webcast about using managed metadata and taxonomies in SharePoint 2010. It discusses metadata and taxonomy definitions and usage scenarios. It covers using folksonomies, taxonomy management, tags for social networking, content type hubs, and configuration tips. The presentation includes demos of adding managed keywords to libraries, tagging documents, using metadata for navigation and search, and administering term sets and metadata fields in the user interface. It provides best practices for design including using shared service applications and considering physical and logical design.
The document provides information about NUDT (National University of Defense Technology) and its Trustie project.
NUDT is a top computer science school in China with over 40 years of experience. The Trustie project aims to create a collaborative software development platform and environment for sharing reusable software assets. It provides tools for software production lines, resource management, trust evaluation, and an integrating framework. The Trustie community involves many universities and companies in China. Applications have been developed in various domains like industrial software, avionics, and power systems using the Trustie platform. NUDT also collaborates with the OW2 open source community.
This document provides guidance for vendors responding to a request for proposal (RFP). It outlines the key steps, which include reading the RFP thoroughly, establishing win themes in an internal kickoff meeting, collecting questions, framing the response, ensuring proper grammar, conducting an internal review, submitting before the deadline, preparing for presentations as an assembled team with rehearsal, taking nothing for granted by being overly prepared, negotiating if selected, celebrating the outcome, and conducting a post-mortem review.
The document discusses the request for proposal (RFP) process. It defines an RFP as an invitation for vendors to submit proposals to provide goods or services to an organization. The document outlines the key steps in the RFP process, including assessing needs, preparing and distributing the RFP, evaluating proposals, conducting presentations, and negotiating contracts. It provides guidance on elements to include in an RFP, questions to ask vendors, tips for evaluating proposals and presentations, and best practices for negotiations.
This document discusses the RFP (Request for Proposal) process. It begins by outlining when an RFP may be needed, such as when a contract is up for renewal or there are issues with the current vendor. It then discusses selecting a consultant to manage the RFP process if desired. The document outlines the consultant's role in defining needs, identifying vendors, developing the RFP, managing communications and evaluations. Key aspects of the RFP are described like requirements, expectations and allowing vendor questions. The proposal, demo and contract phases are also summarized. The goal is to have a smooth transition to the new vendor selected through this competitive process.
This document provides guidance on executing a successful RFP (request for proposal) process. It begins by outlining when an RFP is the right tool and when it may not be suitable. When scope is unclear or requirements are not well defined, a project charter can help determine the best path forward. The document emphasizes treating the RFP as a process, not just a document, with clear communication and sufficient time allotted. It also provides tips on prioritizing requirements, evaluating differentiators between vendors, negotiating contracts, and determining when to engage a consultant.
This document summarizes a seminar on networking for career development. The speaker has over 24 years of experience in strategy, sales, legal, and business development. They will discuss their experiences as a mentee, peer, and mentor. Networking is defined as developing business opportunities through referrals and introductions in person or online to build enduring relationships. The speaker will discuss why networking and mentoring are important for meeting people in your field, learning industry dynamics, and finding new opportunities. They will provide tips on how to network strategically including starting with goals, focusing on personal connections, using professional societies and social networks, and maintaining a long-term perspective. Contact details are provided for anyone seeking mentoring advice.
Elizabeth Demers is a senior acquisitions editor at Johns Hopkins University Press with 20 years of experience in academic and trade publishing. She signs 20-30 books per year, including monographs, trade titles, and course adoption books. She commissions new books, evaluates submitted manuscripts, provides developmental edits, and attends conferences to promote books and the press. Her talk discusses strategies for networking to build professional connections in two areas: building her book list through conferences, outreach, and social media; and finding future career opportunities by getting involved in the industry and being generous with her time and recommendations.
Angela Cochran is a director, mother, wife, daughter, and volunteer leader who advocates for networking through volunteering and active participation. She recommends getting involved in committees and leadership roles to meet people, learn negotiation and collaboration skills, and gain experience in governance. Cochran also suggests attending professional events to ask questions, start conversations, exchange business cards, contribute online, and speak up so others realize your knowledge and potential to contribute.
Digital Science's mission is to fuel scientific discovery with software that simplifies research. They aim to empower researchers with disruptive technology. They incubate and invest in startups in the research field, with the goal of making research simpler so researchers have more time for discovery. Digital Science is a technology company that serves the needs of scientific research by changing the way science works.
The document discusses diversity and inclusion in mentorship at the American Society of Civil Engineers (ASCE). It describes the ASCE Diversity & Inclusion Council established in 2014 with a mission to foster understanding and cultivate an inclusive workforce. The council has 13 members from different departments, designations, races, ethnicities, and genders. It also works with a separate committee for ASCE's over 150,000 members from 177 countries. Activities to promote diversity include highlighting heritage months, lunch-and-learn sessions on topics like disability etiquette and working styles, and inviting outside speakers on bias. Mentorship can be formal or informal and aims to bridge gaps in skills, self-awareness, and confidence through
The Mentorship Program at T&F was created in 2010 based on employee feedback requesting guidance and support from experienced employees. The program is informal with 1:1 mentoring relationships lasting 6-12 months between employees in different divisions. Over 70 matches have been made in 5 years with only 2 not working out. Benefits include 20% of participants being promoted, 10% transferring, and under 5% turnover. The program increased employee engagement and led to improved productivity and cost savings.
This document discusses mentoring at the American Society of Civil Engineers (ASCE). It provides details about the pilot mentoring program launched in 2014 and the full program launched in 2015. Key points include pairing mentees and mentors, providing training and guidelines, and collecting feedback. The program aimed to facilitate a culture shift at ASCE to emphasize core values like trust, teamwork and excellence. Lessons learned include ensuring mentors and mentees are a good match and maintaining expectations. The author provides their own experience being paired as a mentor and mentee.
The document discusses advice and mentorship. It presents a series of fictional scenarios where a person seeks advice at different career stages and receives both helpful and unhelpful advice. It then provides recommendations for finding mentors and making the most of advice received, such as looking across different fields, mentoring others, and remembering that not all advice should be followed. The overall message is that while advice can be good or bad, it is still useful to consider different perspectives to help advance one's career.
October Ivins has worked in various library and information science roles since 1985, including positions at UNC Chapel Hill Library, LSU Baton Rouge Library, and UT Austin. She has been involved with professional organizations like ALA, NASIG, and SSP since 1981. As an independent consultant since 2001, Ivins mentors others on career development topics such as getting the most out of conferences, choosing positions, supervisor and coworker issues, and professional associations. Her document provides advice on training opportunities, managing staff, getting referrals, and preparing for phone interviews.
Early in one's career, a formal mentor is not necessary as support can be found from observing mid-to-late career colleagues. Peer mentoring through collaboration with other managers, especially other women managers, can also be effective. As careers advance, having a women mentor becomes important as women face unique challenges in the workplace and mentors help other women navigate their careers. Without any mentor, one risks lacking career advice, feeling stagnant in their career progression, and experiencing periods of career confusion with no expert to provide guidance.
Adrian Stanley discussed his experience mentoring fellows through the SSP program. He explained that mentoring involves softer guidance to help mentees develop over the long term through balanced listening, directing, and connecting. Fellows benefit from the experience and connections of mentors, who can help open doors, share new perspectives, and make introductions to expand networks and opportunities in the industry. Feedback from fellows showed mentoring helped them learn from experience, feel more included and secure asking questions, and broaden their industry perspectives.
The document discusses two kinds of mentorship at the nonprofit organization BioOne. It provides an overview of BioOne's mission to make scientific research more accessible and its founding by both library and publisher interests. It then defines a "culture of mentorship" as a work environment where employees feel comfortable getting advice from supervisors and colleagues, who see them as whole people rather than just skills. The second kind of mentorship is described as a more traditional unofficial mentor who provides professional guidance. It concludes by listing the executive staff of BioOne and contact information for the speaker.
This document provides a summary of October Ivins' career experience and areas of expertise. It lists her educational background, including degrees from UNC Chapel Hill Library in 1974-1985, UNC Chapel Hill SILS in 1985-1987, and LSU Baton Rouge Library in 1987-1995. It also outlines her work experience at UT Austin SILS from 1995-1998, Publist.com from 1998-2000, Booktech.com from 2000-2001, and as an independent consultant from 2001-present. The document then discusses how her definition of an information professional has loosened over time to include various managerial roles. It concludes by listing topics she provides career coaching and mentoring on, such as choosing jobs
Mohammad H Asadi Lari presented on creating an office culture of mentorship from the perspective of an early career student and mentee. He discussed his experiences being mentored through the SSP Fellowship program and beyond. Emerging trends in early career mentorship include more organizations introducing formal mentorship opportunities and an increase in both professional and peer mentoring models. Mentorship provides visible benefits like networking and career development, as well as hidden benefits beyond initial programs.
This document discusses opportunities for Western academic publishers in China. It notes that China is a rapidly growing market with increasing research output and funding. However, it is also highly competitive. The document outlines several strategies publishers can consider to engage with the Chinese market, including developing local language materials, using social media platforms allowed in China, attending Chinese conferences, exploring co-publishing opportunities with Chinese partners, and developing a long-term strategic plan focused on impact and relationships within China. It also discusses China's increasing open access policies and investments in research universities that could affect publishing opportunities.
This document discusses JSTOR's growing participation in Turkey from 1999-2014. It shows that participation grew slowly at first but increased significantly after the Turkish government began funding access to JSTOR collections through the Anatolian University Libraries Consortium in 2005. Participation and number of collections licensed continued to grow steadily through partnerships with the consortium and engaging a licensing agent in 2013. While agents can help with local representation, awareness, and relationships, they also present challenges of managing expectations, competing demands, and individuals not reporting to JSTOR.
1. Innovation and the STM publisher of the future
Bradley P. Allen, Elsevier Labs
Innovation Session, SSP IN Conference 2011
Arlington, VA, USA
2011-09-19
2. Peak physical media
• “Music Sales”, New York Times, 1 August 2009.
http://www.nytimes.com/imagepages/2009/08/01/opinion/01blow.ready.html
• “Initial Circs per student”, William Denton, 31 January 2011.
http://www.miskatonic.org/2011/01/31/initial-circs-student
• “Rise of e-book Readers to Result in Decline of Book Publishing Business”, Steven Mather, iSuppli, 28 April 2011. http://www.isuppli.com/Home-and-Consumer-Electronics/News/Pages/Rise-of-e-book-Readers-to-Result-in-Decline-of-Book-Publishing-Business.aspx
3. A simple model of the evolution of publishing
• Print era (1600s–1980): packaged as books and articles; physically distributed; access and discovery through libraries
• Digital Library era (1980–2010s): packaged as books and articles; digitally distributed; access and discovery through search engines
• Platform-as-a-Service era (2010s–): packaged as apps and APIs; digitally distributed; access and discovery through social networks
4. Facets of STM publishing in the PaaS era
• Process types: Acquisition; Extract, Transform, Load; Enhancement; Indexing; Discovery and Access; Composition; Delivery
• Entities: Submitting author; Supplier; Web site; Typesetter; Automated process; Subject matter expert; Search engine; Content repository; Entity registry; Editor; Reviewer; User; Designer; Developer
• Activities: Entity extraction; Crawling; Fact extraction; Syndicating; Clustering; Formatting; Aggregating; Mapping; Ordering; Cleansing; Summarizing; Indexing; Filtering; Querying; Analysis; Updating; Data science; Storing; Rendering; Annotating; Design; Subject tagging; Publishing; Classification; Entity recognition; Accessing; Retrieving; Deleting
• Content types: Product catalog; Article; Book; Media object; Entity record; Asset metadata; Relational metadata; Provenance metadata; Usage metadata; Taxonomy; Ontology; E-book; Mobile app; Mobile-enhanced Web site; API; User-generated content
5. STM publishing as business intelligence
Surajit Chaudhuri, Umeshwar Dayal, and Vivek Narasayya. 2011. An overview of business intelligence technology. Commun.
ACM 54, 8 (August 2011), 88-98. http://doi.acm.org/10.1145/1978542.1978562
6. Some scenarios to compare the two digital eras

Scenario: A new medical term relevant to an emerging healthcare issue (e.g. a new type of avian flu virus) needs to be incorporated into a search index immediately.
• Digital Library era: Organizational governance issues about how taxonomies are updated, coupled with manually intensive workflows and ad hoc approaches to content tagging, inhibit rapid response.
• Platform-as-a-Service era: A single, automated and standardized taxonomy management and content enhancement workflow allows rapid and timely update of search applications.

Scenario: Application developers want to mash up epidemiological data with medical journal articles to create topic-specific Web resources.
• Digital Library era: Data silos without easy means of programmatic access by developers, coupled with governance and business model questions, inhibit data reuse.
• Platform-as-a-Service era: A Content API and a single-point-of-access repository allow data and content to be accessed, discovered and reused across multiple applications.

Scenario: Digital library developers want to stage content into a single repository for unified search index generation.
• Digital Library era: Duplication of core content leads to synchronization and quality control issues.
• Platform-as-a-Service era: Consolidation of duplicate repositories into a single point of truth across all content, accessible and discoverable through a Content API, eliminates the need for duplication and synchronization.

Scenario: Third-party solution providers want to integrate content (e.g. tagged medical journal articles, medical taxonomies) into point-of-care solutions.
• Digital Library era: No standards and no APIs for point-of-care content integration across all content and data.
• Platform-as-a-Service era: Standards and APIs that scale across multiple partners, for all content types, for all delivery formats.

Scenario: Publishers want to deliver their content to tablets and e-readers in delivery formats that take advantage of the displays and interaction modalities of those devices.
• Digital Library era: No clear standard or approach for targeting emerging eReader and tablet devices; multiple and divergent approaches lead to siloed solutions and duplication of effort.
• Platform-as-a-Service era: Web and industry standards for eReader and tablet devices are supported as part of standard automated processing into delivery-channel-specific formats, regularly updated and exposed through a Content API.

Scenario: A journal publisher wants to integrate content enhancements across multiple subject matter areas to add value to products leveraging Article of the Future technology.
• Digital Library era: No single point of access to content enhancements; no standards for content enhancement suppliers and partners to deliver enhancements for integration.
• Platform-as-a-Service era: Easy access to multiple opportunities for content enhancements, embedded in standard next-generation article formats and provided using standard content enhancement formats.
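The first scenario — propagating a new medical term into a search index immediately — can be sketched as an event-driven pipeline in which a taxonomy update automatically triggers re-tagging and re-indexing, with no manual hand-offs. This is an illustrative sketch only, not an actual publisher pipeline; all class and method names (Taxonomy, SearchIndex, add_term) are hypothetical.

```python
# Hypothetical sketch: a taxonomy update fires subscriber callbacks that
# re-tag the corpus and refresh the search index in one automated step.

class Taxonomy:
    def __init__(self):
        self.terms = {}            # term -> set of synonyms
        self.subscribers = []      # callbacks fired on every update

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def add_term(self, term, synonyms=()):
        self.terms[term] = set(synonyms)
        for notify in self.subscribers:
            notify(term, self.terms[term])


class SearchIndex:
    def __init__(self, documents):
        self.documents = documents  # doc_id -> text
        self.postings = {}          # term -> set of matching doc_ids

    def on_taxonomy_update(self, term, synonyms):
        # Re-tag the corpus for the new term and its synonyms immediately.
        matches = {term.lower()} | {s.lower() for s in synonyms}
        self.postings[term] = {
            doc_id for doc_id, text in self.documents.items()
            if any(m in text.lower() for m in matches)
        }

    def search(self, term):
        return sorted(self.postings.get(term, set()))


docs = {
    "a1": "Outbreak of H7N9 avian influenza reported in poultry workers.",
    "a2": "Review of seasonal influenza vaccination strategies.",
}
index = SearchIndex(docs)
taxonomy = Taxonomy()
taxonomy.subscribe(index.on_taxonomy_update)

# The new term becomes searchable as soon as it enters the taxonomy.
taxonomy.add_term("H7N9", synonyms=["avian influenza"])
print(index.search("H7N9"))  # -> ['a1']
```

The contrast with the Digital Library era column is the subscription: the index learns about the new term from the taxonomy itself rather than from a manually scheduled re-tagging project.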
7. Goals for the publisher of the future
• Craft content acquisition, production and management
systems that support a broad range of content types and
delivery channels with equal capability and flexibility
• Make it easy for authors, editors and reviewers to work
with bundles of content and data in the aggregate
• Make it easy to discover and access, across all content
assets, information in fragments smaller than the unit of
publication
• Then make it easy to aggregate and compose these
fragments into new products and services
• Leverage the tremendous power of Web architectural
standards and formats to increase the ease of content
integration and interoperability
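The goal of discovering and accessing fragments smaller than the unit of publication implies that every sub-article unit gets its own stable, resolvable address. A minimal sketch of what such addressing might look like follows; the URI pattern and resolver are invented for illustration and do not describe any existing publisher API.

```python
# Hypothetical addressing scheme for content fragments smaller than the
# unit of publication: /articles/{id}/fragments/{type}/{n}.

ARTICLES = {
    "S0001": {
        "title": "Example article",
        "fragments": {
            ("figure", 1): "Figure 1: viral transmission pathways",
            ("table", 2): "Table 2: case counts by region",
        },
    },
}

def resolve(uri):
    """Resolve a fragment URI of the form /articles/{id}/fragments/{type}/{n}."""
    _, article_id, _, ftype, n = uri.strip("/").split("/")
    article = ARTICLES[article_id]
    return article["fragments"][(ftype, int(n))]

print(resolve("/articles/S0001/fragments/table/2"))
# -> Table 2: case counts by region
```

Once fragments resolve individually like this, aggregation and composition into new products reduces to collecting and recombining fragment URIs.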
8. New requirements for content management
• Broad range of content types: must treat video, audio, images, datasets, metadata and knowledge organization systems as first-class objects, in addition to articles and books
• Standards-based: Web-standard formats to support ease of integration and interoperability
• Fine-grained: must be decomposable into, and addressable in, fragments smaller than the unit of publication; e.g., down to the level of specific words, phrases, images and table cells in articles or book chapters, and key frames and segments in videos
• Discoverable: must be easily located across all levels of granularity
• Accessible: must be easily accessed through content creation, retrieval, update and deletion (CRUD) services
• Flexible: new content types and associated schemas must be easily added through configuration
• Reusable: it must be efficient for product developers to aggregate and compose content fragments into new products
• Modifiable: must support the enhancement and correction of content at any time following creation
• Broad range of delivery formats: content standards and services must support fulfillment, delivery and presentation across desktop, notebook, tablet and mobile computing devices
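The "accessible" and "modifiable" requirements above amount to CRUD services over typed content objects. A toy in-memory sketch makes the contract concrete; the class name, ID scheme and record shape are illustrative assumptions, not a real content-management API.

```python
# Minimal in-memory CRUD service over typed content objects, sketching the
# "accessible" requirement: create, retrieve, update, delete.

import itertools

class ContentStore:
    def __init__(self):
        self._objects = {}
        self._ids = itertools.count(1)

    def create(self, content_type, body):
        object_id = f"{content_type}-{next(self._ids)}"
        self._objects[object_id] = {"type": content_type, "body": body}
        return object_id

    def retrieve(self, object_id):
        return self._objects[object_id]

    def update(self, object_id, body):
        # "Modifiable": enhancement and correction at any time after creation.
        self._objects[object_id]["body"] = body

    def delete(self, object_id):
        del self._objects[object_id]


store = ContentStore()
oid = store.create("dataset", {"rows": 3})
store.update(oid, {"rows": 4})
print(store.retrieve(oid)["body"]["rows"])  # -> 4
```

Because `create` accepts any content type and any body, adding a new content type needs no schema migration — the "flexible" requirement in miniature.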
9. Leveraging Web standards for sharing
1. Use URIs to name things
2. Use HTTP URIs so they can be looked up
3. Return useful data when things are looked up
4. Include links to other things in the returned data
“Linked data is just a term for how to publish
data on the web while working with the
web. And the web is the best
architecture we know for publishing
information in a hugely diverse and
distributed environment, in a gradual
and sustainable way.”
Tennison J, 2010. Why Linked Data for data.gov.uk?
http://www.jenitennison.com/blog/node/140
Shotton D, Portwin K, Klyne G, Miles A, 2009. Adventures in Semantic Publishing:
Exemplar Semantic Enhancements of a Research Article. PLoS Comput Biol 5(4):
e1000361. doi:10.1371/journal.pcbi.1000361
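Applied to a journal article, the four rules above might yield a record like the following JSON-LD-style sketch: the article is named by an HTTP URI, dereferencing that URI returns useful data, and the data links out to other URIs. The identifiers and vocabulary terms here are illustrative only, not a published vocabulary.

```python
# A content object published per the four linked-data rules: named by an
# HTTP URI (rules 1-2), returning useful data when looked up (rule 3),
# and linking to other things via further URIs (rule 4).

article = {
    "@id": "http://example.org/articles/pcbi.1000361",
    "@type": "ScholarlyArticle",
    "title": "Adventures in Semantic Publishing",
    "about": {"@id": "http://example.org/topics/semantic-publishing"},
    "citation": [{"@id": "http://example.org/articles/some-cited-work"}],
}

def dereference(uri, graph):
    """Rule 3: looking up an HTTP URI returns useful data about it."""
    return graph.get(uri)

graph = {article["@id"]: article}
record = dereference("http://example.org/articles/pcbi.1000361", graph)
print(record["title"])  # prints: Adventures in Semantic Publishing
```

A client that follows the `about` and `citation` links can traverse from this record to related resources without any publisher-specific API knowledge — the "working with the web" that the Tennison quote describes.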
10. From books and articles to evolving research objects
[Diagram: a research object as a graph of linked data — an article, media objects and entity records connected to one another by relational metadata — moving through an Acquire → Transform, Enhance, Compose → Deliver pipeline]
11. Leveraging consumer Web innovations
• Emergent technologies driven by consumer Web applications
emphasize design choices that focus on delivering cheap, robust
and scalable Web applications
– Schemaless document stores provide read/write at Web scale with
support for analytics
• For more dynamic, fine-grained content and linked data
• For easier usage and citation analysis, bibliometrics and scientometrics
– Web application development frameworks that leverage HTML5/CSS/JS
to deliver across desktops, notebooks, tablets and smartphones
– Deploying in the cloud and moving scale-out from development to
operations to reduce time-to-market, cost of failure for emerging, niche
publishing opportunities
• As we shift to the Platform-as-a-Service era, these features
become an important part of the STM publishing technology stack
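As a toy illustration of the schemaless-document-store point above: fragments with heterogeneous shapes can sit in a single collection with no shared schema and still be queried for usage analytics. The sketch below stands in for a real document store with a plain Python list; every field name in it is a hypothetical example.

```python
# Schemaless storage sketch: heterogeneous content fragments live in one
# collection with no fixed schema, yet remain queryable for analytics.

fragments = [
    {"id": "fig-1", "type": "figure", "article": "a1", "views": 120},
    {"id": "tab-1", "type": "table", "article": "a1", "views": 45,
     "columns": ["region", "cases"]},        # extra field, no schema change
    {"id": "vid-1", "type": "video", "article": "a2", "views": 300,
     "duration_s": 90},
]

def usage_by_article(collection):
    """Aggregate view counts per article -- a simple usage-analytics query."""
    totals = {}
    for doc in collection:
        totals[doc["article"]] = totals.get(doc["article"], 0) + doc["views"]
    return totals

print(usage_by_article(fragments))  # -> {'a1': 165, 'a2': 300}
```

The table fragment carries a `columns` field and the video a `duration_s` field that the others lack; in a schemaless store, adding such fields requires no migration, which is what makes fine-grained, evolving content cheap to manage.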
15. The publisher of the future as lean startup
• This stuff is not just for big publishers
• These are the tools that new consumer
Internet businesses are using to create new
products and services today… quickly and on
the cheap
• Smaller publishers and societies can use lean
startup techniques to drive app and API
design and development starting from
existing web presences and third-party APIs
19. Challenges for the publisher of the future
• When content can be mashed up at a fine level
of granularity using multiple third-party APIs,
what are the rights associated with the resulting
product? What are the appropriate business
models?
• What standards should there be for research
objects?
• Who gets credit for research objects? How is
impact determined and reputation managed?
• What is an acceptable trade-off between content
flexibility and high-touch presentation design?
20. In summary
• STM publishing is only beginning the transition
from print to online
• Articles and books are no longer sufficient
containers for scholarly communication
• Tools to effect this change come from the
consumer Internet and the business intelligence
worlds
• Publishers of the future will leverage the best
practices emerging around these tools to create
innovative new products to serve their
communities