This document outlines the Make Data Count (MDC) initiative to standardize and promote the tracking of research data usage metrics. MDC has developed a Code of Practice for data usage logs, built an open hub to aggregate standardized usage data, and implemented tracking and display of usage metrics at their own repositories. They encourage other repositories to follow five simple steps to Make Their Data Count: 1) Read the Code of Practice, 2) Process usage logs, 3) Send logs to the hub, 4) Pull usage metrics from the hub, and 5) Display metrics. Future work includes outreach, iteration on implementations, and expanding metrics beyond DOIs.
The Scholix Framework and the OpenAIRE Scholexplorer Service (OpenAIRE webinar)
Presentation from the OpenAIRE webinar on "Scholix guidelines for data-literature integration: opportunities for OpenAIRE compatible repositories", by Paolo Manghi (CNR-ISTI), December 5, 2017.
Denodo DataFest 2016: What’s New in Denodo Platform – Demo and Roadmap (Denodo)
Watch the full session: Denodo DataFest 2016 sessions: https://goo.gl/ptGwp7
Curious about product roadmap? In this session, we will review some of the new key features introduced this year in the Denodo Platform in areas such as performance, self-service, security and monitoring. We will also take a sneak peek at the most exciting features in the roadmap for Denodo 7.0.
In this session, you will learn:
• New performance-related features in big data scenarios
• New governance and self-service features
• New connectivity, data transformation, and enterprise-wide deployment features
This session is part of the Denodo DataFest 2016 event. You can also watch more Denodo DataFest sessions on demand here: https://goo.gl/VXb6M6
CLOUD COMPUTING AND ITS APPLICATIONS IN DIGITAL LIBRARY SERVICES (Koushik Pathak)
Covers the WWW, BBS, and e-mail service models, and weighs the pros and cons of on-site versus cloud-based backup for digital data. With cloud computing, users can access a digital library from anywhere, at any time. The cloud computing architecture comprises many loosely coupled components.
Accelerating Delivery of Data Products – The EBSCO Way (MongoDB)
EBSCO Information Services (EBSCO) is the leading provider of electronic journals, magazines, eBooks, audiobooks, and online research content for libraries, including hundreds of research databases, historical archives, point-of-care medical reference, and corporate learning tools serving millions of end users at tens of thousands of institutions worldwide. The EBSCO platform serves researchers at all levels in academic institutions, schools, public libraries, hospitals, medical institutions, corporations, and government institutions. Data is our business, and delivering new products quickly is our competitive advantage. We build hundreds of data products, and accelerating the analysis and transformation of new datasets translates to revenue and competitiveness. Because our data is so varied, using MongoDB to store data flexibly and JSON Studio to analyze it allows us to deliver products to market faster. In this session we will describe the process that helped us expedite delivery of new datasets, and give real examples of how data is used, analyzed, and processed.
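The flexible-schema idea in that summary is easy to illustrate. The following is a minimal sketch, not EBSCO's actual pipeline: heterogeneous source records are stored as-is as MongoDB documents, so new dataset shapes can be ingested without upfront schema migrations. The database and collection names are hypothetical; a local MongoDB instance and the pymongo driver are assumed.

```python
from pymongo import MongoClient

# Hypothetical database/collection names; assumes MongoDB on localhost.
client = MongoClient("mongodb://localhost:27017")
records = client["ebsco_demo"]["source_records"]

# Two records with different shapes land in the same collection unchanged.
records.insert_many([
    {"source": "journal_feed", "issn": "1234-5678", "title": "Example Journal"},
    {"source": "ebook_feed", "isbn": "978-0-00-000000-0", "formats": ["epub", "pdf"]},
])

# Analysis can still query across shapes by the fields records share.
for doc in records.find({"source": "journal_feed"}):
    print(doc["title"])
```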
Tracking research data footprints via integration with Research Graph
Slides from the March 1st, 2018 webinar, presented by Ben Evans and Jingbo Wang from NCI.
Polyglot Persistence and Database Deployment, by Sandeep Khuperkar, CTO and Dir... (Ashnikbiz)
This presentation covers what polyglot persistence is, how to choose the right database technology for a scalable architecture, and the emerging world of polyglot persistence built on the open source database ecosystem.
Polyglot persistence is not an out-of-the-box product; it must be designed for each individual enterprise and its unique data architecture.
Collaboration is crucial to today’s workforce. Whether you are in a traditional office setting, work from home or travel extensively, there are tools needed to achieve successful content collaboration.
Whether your mission is to improve external collaboration, increase scalability or focus on security and compliance, find out how content collaboration with Box can improve your ROI.
To find out more on how to improve your content journey, visit IBM ECM and Box: http://ibm.co/ibm-box-partnership
Data Virtualization Reference Architectures: Correctly Architecting your Solutions for Analytical & Operational Uses (Denodo)
Correctly Architecting your Solutions for Analytical & Operational Uses reviews the two main types of use cases that can be solved with the Denodo Platform. Both high concurrency scenarios and big reporting use cases are discussed in this presentation in a comparative way, explaining the different approaches that you must take to be successful in any situation.
This presentation is part of the Fast Data Strategy Conference; you can watch the video here: goo.gl/wdZgpo.
Bridging to a hybrid cloud data services architecture (IBM Analytics)
Enterprises are increasingly evolving their data infrastructures into entirely cloud-facing environments. Interfacing private and public cloud data assets is a hallmark of initiatives such as logical data warehouses, data lakes, and online transactional data hubs. These projects may involve deploying two or more of the following cloud-based data platforms into a hybrid architecture: Apache Hadoop, data warehouses, graph databases, NoSQL databases, multiworkload SQL databases, open source databases, data refineries, and predictive analytics.
Data application developers, data scientists and analytics professionals are driving their organizations’ efforts to bridge their data to the cloud. Several questions are of keen interest to those who are driving an organization’s evolution of its data and analytics initiatives into more holistic cloud-facing environments:
• What is a hybrid cloud data services architecture?
• What are the chief applications and benefits of a hybrid cloud data services architecture?
• What are the best practices for bridging a logical data warehouse to the cloud?
• What are the best practices for bridging advanced analytics and data lakes to the cloud?
• What are the best practices for bridging an enterprise database hub to the cloud?
• What are the first steps to take for bridging private data assets to the cloud?
• How can you measure ROI from bridging private data to public cloud data services?
• Which case studies illustrate the value of bridging private data to the cloud?
Sign up now for a free 3-month trial of IBM Analytics for Apache Spark and IBM Cloudant, IBM dashDB or IBM DB2 on Cloud.
http://ibm.co/ibm-cloudant-trial
http://ibm.co/ibm-dashdb-trial
http://ibm.co/ibm-db2-trial
http://ibm.co/ibm-spark-trial
Bob Jones, CERN & HNSciCloud Coordinator, gives an update on the HNSciCloud Pre-Commercial Procurement, which is now in its Solution Prototyping phase. The presentation also includes an overview of the prototypes under development.
RAGLD - Rapid Assembly of Geo-Centred Linked Data Applications (John Goodwin)
This talk describes the RAGLD framework (Rapid Assembly of Geo-centred Linked Data), with examples of how it can make linked data applications easier to develop.
As more linked data and open data emerges, a need was identified for a suite of application developer tools that make it easier to bring together, use, and exploit these diverse data sets. RAGLD aims to create that set of tools, components, and services.
This presentation, given by Bob Jones, CERN & HNSciCloud Coordinator, at the ESA-ESPI Workshop on “Space Data & Cloud Computing Infrastructures: Policies and Regulations”, describes the challenges and needs of cloud users and explains how a hybrid cloud model can support them.
Putting the Ops in DataOps: Orchestrate the Flow of Data Across Data Pipelines (DATAVERSITY)
With the aid of any number of data management and processing tools, data flows through multiple on-prem and cloud storage locations before it’s delivered to business users. As a result, IT teams — including IT Ops, DataOps, and DevOps — are often overwhelmed by the complexity of creating a reliable data pipeline that includes the automation and observability they require.
The answer to this widespread problem is a centralized data pipeline orchestration solution.
Join Stonebranch’s Scott Davis, Global Vice President, and Ravi Murugesan, Sr. Solution Engineer, to learn how DataOps teams orchestrate their end-to-end data pipelines with a platform approach to managing automation.
Key Learnings:
- Discover how to orchestrate data pipelines across a hybrid IT environment (on-prem and cloud)
- Find out how DataOps teams are empowered with event-based triggers for real-time data flow
- See examples of reports, dashboards, and proactive alerts designed to help you reliably keep data flowing through your business — with the observability you require
- Discover how to replace clunky legacy approaches to streaming data in a multi-cloud environment
- See what’s possible with the Stonebranch Universal Automation Center (UAC)
Augmentation, Collaboration, Governance: Defining the Future of Self-Service BI (Denodo)
Watch full webinar here: https://bit.ly/3zVJRRf
According to Dresner Advisory’s 2020 Self-Service Business Intelligence Market Study, 62% of the responding organizations say self-service BI is critical for their business. Today’s self-service BI goes beyond IT enabling a few executives and business users to build dashboards or generate reports: predictive analytics, self-service data preparation, and collaborative data exploration are all facets of the new generation of self-service BI. While the democratization of data for self-service BI holds many benefits, strict data governance becomes increasingly important alongside it.
In this session we will discuss:
- The latest trends and scopes of self-service BI
- The role of logical data fabric in self-service BI
- How Denodo enables self-service BI for a wide range of users
- Customer case study on self-service BI
Advanced Analytics and Machine Learning with Data Virtualization (Denodo)
Watch full webinar here: https://bit.ly/32c6TnG
Advanced data science techniques, like machine learning, have proven extremely useful for deriving valuable insights from existing data. Platforms like Spark, and rich libraries for R, Python, and Scala, put advanced techniques at the fingertips of data scientists. However, data scientists spend most of their time looking for the right data and massaging it into a usable format. Data virtualization offers an alternative that addresses these issues in a more efficient and agile way.
Attend this webinar and learn:
- How data virtualization can accelerate data acquisition and massaging, providing the data scientist with a powerful tool to complement their practice
- How popular tools from the data science ecosystem (Spark, Python, Zeppelin, Jupyter, etc.) integrate with Denodo; a minimal connection sketch follows this list
- How you can use the Denodo Platform with large data volumes in an efficient way
- About the success McCormick has had as a result of seasoning the Machine Learning and Blockchain Landscape with data virtualization
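As a rough illustration of the notebook-side integration mentioned above: Denodo publishes its virtual views over standard ODBC/JDBC interfaces, so Python tools can query them like any relational database. This is a minimal sketch under stated assumptions, not Denodo's official client code; the DSN name, credentials, and view name are all hypothetical.

```python
import pandas as pd
import pyodbc

# Assumes an ODBC DSN named "denodo" is configured to point at a Denodo
# server, and that a virtual view "sales_by_region" exists (both hypothetical).
conn = pyodbc.connect("DSN=denodo;UID=ds_user;PWD=secret")

# Query the virtual view as if it were an ordinary table.
df = pd.read_sql(
    "SELECT region, SUM(amount) AS total FROM sales_by_region GROUP BY region",
    conn,
)
print(df.head())
```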
When and How Data Lakes Fit into a Modern Data Architecture (DATAVERSITY)
Whether to take data ingestion cycles off the ETL tool and the data warehouse or to facilitate competitive Data Science and building algorithms in the organization, the data lake – a place for unmodeled and vast data – will be provisioned widely in 2020.
Though it doesn’t have to be complicated, the data lake has a few key design points that are critical, and it does need to follow some principles for success. Build the data lake, but avoid building a data swamp! The tool ecosystem is building up around the data lake, and soon many organizations will have both a robust lake and a data warehouse. We will discuss policies to keep them straight, send data to its best platform, and keep users’ confidence in their data platforms high.
Data lakes will be built in cloud object storage. We’ll discuss the options there as well.
Get this data point for your data lake journey.
Denodo DataFest 2016: Comparing and Contrasting Data Virtualization With Data... (Denodo)
Watch the full session: Denodo DataFest 2016 sessions: https://goo.gl/Bvmvc9
Data prep and data blending are terms that have come to prominence over the last year or two. On the surface, they appear to offer functionality similar to data virtualization…but there are important differences!
In this session, you will learn:
• How data virtualization complements or contrasts technologies such as data prep and data blending
• Pros and cons of functionality provided by data prep, data catalog and data blending tools
• When and how to use these different technologies to be most effective
This session is part of the Denodo DataFest 2016 event. You can also watch more Denodo DataFest sessions on demand here: https://goo.gl/VXb6M6
Open Data management is still neither trivial nor sustainable. The COMSODE results are here to bring automation to the publication and management of Open Data in public institutions and companies. The presentation includes the Open Data Ready standard proposal, three use cases, and an invitation for Horizon 2020 projects in 2016.
Analytical Innovation: How to Build the Next Generation Data Platform (VMware Tanzu)
There was a time when the Enterprise Data Warehouse (EDW) was the only way to provide a 360-degree analytical view of the business. In recent years many organizations have deployed disparate analytics alternatives to the EDW, including: cloud data warehouses, machine learning frameworks, graph databases, geospatial tools, and other technologies. Often these new deployments have resulted in the creation of analytical silos that are too complex to integrate, seriously limiting global insights and innovation.
Join guest speaker, 451 Research’s Jim Curtis and Pivotal’s Jacque Istok for an interactive discussion about some of the overarching trends affecting the data warehousing market, as well as how to build a next generation data platform to accelerate business innovation. During this webinar you will learn:
- The significance of a multi-cloud, infrastructure-agnostic approach to analytics
- What is working and what isn’t, when it comes to analytics integration
- The importance of seamlessly integrating all your analytics in one platform
- How to innovate faster, taking advantage of open source and agile software
Speakers: James Curtis, Senior Analyst, Data Platforms & Analytics, 451 Research & Jacque Istok, Head of Data, Pivotal
SharePoint migrations rarely turn out as you plan them. They are sometimes risky and too often take longer than planned. Over the last 10 years of migrating from SharePoint 2003, 2007, and 2010 to the latest versions of SharePoint/Office 365, we’ve seen a consistent theme: organizations underestimate the complexity and level of effort required for a successful, smooth migration.
Whether you are planning to complete your own migration, or engaging a vendor to assist, this webinar will discuss precautions you can take to avoid the slippery slope experienced in SharePoint migrations.
Join Jill Hannemann, Adam Levithan and our special guest Ryan Tully from Metalogix as they:
- Go through the assessment steps to understand the full landscape of your existing SharePoint environment
- Review methodologies for moving content from one environment to the next
- Outline precautions you should take in migrating to either SharePoint 2013 on-premise or online
It is a fascinating, explosive time for enterprise analytics.
It is from the position of analytics leadership that the mission will be executed and company leadership will emerge. In this information economy, the data professional sits squarely on the performance of the company and has an obligation to demonstrate the possibilities and to originate the architecture, data, and projects that will deliver analytics. After all, no matter what business you’re in, you’re in the business of analytics.
The coming years will be full of big changes in enterprise analytics and Data Architecture. William will kick off the fourth year of the Advanced Analytics series with a discussion of the trends winning organizations should build into their plans, expectations, vision, and awareness now.
"Towards GeneratingPolicy-compliant Datasets" by Christophe Debruyne, Harshvardhan J. Pandit, Dave Lewis, Declan O’Sullivan. Presented at the The 13th IEEE International Conference on SEMANTIC COMPUTING
Jan 30 - Feb 1, 2019, Newport Beach, California
Introduction to streaming data; the difference between batch processing and stream processing; research issues in streaming data processing; performance evaluation metrics; and tools for stream processing.
How a Time Series Database Contributes to a Decentralized Cloud Object Storage Network (InfluxData)
In this presentation, you'll learn how InfluxDB is a component of Storj’s Tardigrade service and workflows. John Gleeson and Ben Sirb of Storj Labs will discuss Storj’s redefinition of a cloud object storage network, how InfluxData fits into Storj’s Open Source Partner Program, and how to collect and manage high-volume, real-time telemetry data from a distributed network.
Advanced Analytics and Machine Learning with Data Virtualization (Denodo)
Watch: https://bit.ly/2DYsUhD
Advanced data science techniques, like machine learning, have proven extremely useful for deriving valuable insights from existing data. Platforms like Spark, and rich libraries for R, Python, and Scala, put advanced techniques at the fingertips of data scientists. However, data scientists spend most of their time looking for the right data and massaging it into a usable format. Data virtualization offers an alternative that addresses these issues in a more efficient and agile way.
Attend this webinar and learn:
- How data virtualization can accelerate data acquisition and massaging, providing the data scientist with a powerful tool to complement their practice
- How popular tools from the data science ecosystem (Spark, Python, Zeppelin, Jupyter, etc.) integrate with Denodo
- How you can use the Denodo Platform with large data volumes in an efficient way
- How Prologis accelerated their use of Machine Learning with data virtualization
Data Con LA 2018 - Enabling real-time exploration and analytics at scale at Hulu (Data Con LA)
Enabling real-time exploration and analytics at scale to drive operational intelligence at Hulu by Indrasis Mondal, Director, Data Engineering and Data Products, Hulu
Data is one of the most powerful assets for companies today and a key driver of innovation, product development, and business efficiency. Operational intelligence allows a modern organization to use that data asset in real time, enabling immediate insight into business operations and rapid decision making for strategic advantage. In this presentation we will walk through the operational intelligence capabilities Hulu has built to process tens of millions of events per minute, enabling fast exploration of data and real-time decision making.
Similar to How to make your data count webinar, 26 Nov 2018
Presentation by Dr Steve McEachern, ADA, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
Presentation by Hugo Leroux and Liming Zhu, CSIRO, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
Presentation by Kelly Hart, ONDC in PM&C, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
Presentation by Prof Chris Rowe, ADNet, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
Investigator-initiated clinical trials: a community perspective (ARDC)
Presentation by Miranda Cumpston, ACTA, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
Presentation by Dr Merran Smith, PHRN, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
International perspective for sharing publicly funded medical research data (ARDC)
Presentation by Olivier Salvado, CSIRO, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
Presentation by Prof Lisa Askie, ANZCTR, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
Presentation by Dr Davina Ghersi, NHMRC, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
Presentation by Dr Adrian Burton, ARDC, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
FAIR for the future: embracing all things data (ARDC)
FAIR for the future: embracing all things data - Natasha Simons, Keith Russell and Liz Stokes, presented at Taylor & Francis Scholarly Summits in Sydney 11 Feb 2019 and Melbourne 14 Feb 2019.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra Bartaguiz)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deployment Firewall and DBOM (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
8. Make Data Count, 2017–2019
1. Formal recommendation for measuring data usage
2. Develop a hub for all Data Level Metrics (DLM)
3. Make usage tracking easier
4. Drive adoption by showing how it can be done (easily)
5. Engage across all research communities
6. Iterate!
22. Scholix is not a thing - it is a change initiative
MDC & Scholix work hand in hand to advocate for best data citation practices:
● Scholix is an information framework for submitting data citations (a sketch of a link record follows below)
● MDC allows for displaying data citations back at the repository
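To make the framework concrete, here is a minimal sketch of a Scholix-style link-information package relating an article to a dataset it cites. The field names follow the published Scholix metadata schema, but the exact spelling and the DOIs shown are assumptions to verify against the current schema at scholix.org.

```python
# A minimal Scholix-style link record (illustrative DOIs; verify field names
# against the current Scholix schema).
scholix_link = {
    "LinkPublicationDate": "2018-11-26",
    "LinkProvider": [{"Name": "DataCite"}],
    "RelationshipType": {"Name": "References"},
    "Source": {
        "Identifier": {"ID": "10.1000/example-article", "IDScheme": "DOI"},
        "Type": {"Name": "literature"},
    },
    "Target": {
        "Identifier": {"ID": "10.5072/example-dataset", "IDScheme": "DOI"},
        "Type": {"Name": "dataset"},
    },
}
print(scholix_link["RelationshipType"]["Name"])
```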
24. Why it is important
The community has long grappled with the problem of assessing and tracking the results of scholarship:
● Researchers
● Repositories
● Funders
● Publishers
25. Five simple steps to Make YOUR Data Count
1. Read the data usage metrics standard, the “Code of Practice for Research Data”
2. Process your usage logs against this standard
3. Send the processed and standardized usage logs to an open hub
4. Pull usage and citation metrics from an open hub
5. Display standardized usage and citation metrics on your repository interface
26. Getting Started
We have built a “Getting Started” guide walking through these steps as implemented in CDL’s Dash:
https://github.com/CDLUC3/Make-Data-Count/blob/master/getting-started.md
30. Standardized Logs
● Specialized logs that are processed against the Code of Practice
● Views
● Downloads
● Users: counted at the country level, with access grouped by session
● Sessions: repeat accesses to a page within 30 seconds are de-duplicated (see the sketch below)
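The session rule above lends itself to a short sketch. This is not the official Code of Practice processor, just an illustration of the de-duplication idea: repeat hits on the same dataset by the same user within 30 seconds are collapsed before views and downloads are tallied. The log fields and the DOI are made up.

```python
from datetime import datetime, timedelta

raw_log = [
    # (timestamp, user/session key, dataset DOI, action) -- illustrative only
    (datetime(2018, 11, 26, 10, 0, 0), "u1", "10.5072/example", "view"),
    (datetime(2018, 11, 26, 10, 0, 15), "u1", "10.5072/example", "view"),  # repeat within 30 s
    (datetime(2018, 11, 26, 10, 5, 0), "u1", "10.5072/example", "download"),
]

WINDOW = timedelta(seconds=30)
last_seen = {}  # (user, doi, action) -> timestamp of last counted hit
counts = {}     # (doi, action) -> de-duplicated count

for ts, user, doi, action in sorted(raw_log):
    key = (user, doi, action)
    if key in last_seen and ts - last_seen[key] < WINDOW:
        continue  # repeat access within the 30-second window: not counted
    last_seen[key] = ts
    counts[(doi, action)] = counts.get((doi, action), 0) + 1

# {('10.5072/example', 'view'): 1, ('10.5072/example', 'download'): 1}
print(counts)
```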
35. Usage Metrics Hub (hosted by DataCite)
● Aggregator of research data usage reports
● Usage reports are made available via API (in the original JSON format), and soon via a web interface and CSV
● Usage reports are broken down by dataset (and request method), and can then be aggregated over time (see the sketch below)
● Information in usage reports can be combined with data citations and dataset metadata
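A hedged sketch of working against the hub: fetch one usage report over the API and sum counts per dataset and metric type. The endpoint path and the COUNTER/SUSHI-style field names ("report-datasets", "performance", "instance") reflect the hub's published report format as best understood here; treat them as assumptions and check the live DataCite API documentation.

```python
import requests

report_id = "some-report-uuid"  # hypothetical report identifier
resp = requests.get(f"https://api.datacite.org/reports/{report_id}")
resp.raise_for_status()
report = resp.json()["report"]

# Sum counts per (dataset DOI, metric type) across the reporting period.
totals = {}
for ds in report.get("report-datasets", []):
    doi = ds.get("dataset-id", [{}])[0].get("value")
    for perf in ds.get("performance", []):
        for inst in perf.get("instance", []):
            key = (doi, inst["metric-type"])
            totals[key] = totals.get(key, 0) + inst["count"]

for (doi, metric), n in totals.items():
    print(doi, metric, n)
```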
37. Pulling Usage and Citations
● Data usage metrics and citations are made available via a public API, with one “event” for each data citation or monthly usage count (see the sketch below)
● Data citations are provided by DataCite metadata (i.e. they come from data repositories) and by Crossref, with more sources to come
● The currently separate APIs for usage, citations, and dataset metadata will be combined into a single API for easier retrieval of information
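And a companion sketch for pulling individual events: one query to the public events API returns citation and usage events for a dataset DOI. The endpoint and attribute names ("relation-type-id", "subj-id", "obj-id") follow DataCite Event Data conventions, but verify them against the current API; the DOI is hypothetical.

```python
import requests

doi = "10.5072/example"  # hypothetical dataset DOI
resp = requests.get(
    "https://api.datacite.org/events",
    params={"doi": doi, "page[size]": 100},
)
resp.raise_for_status()

for event in resp.json().get("data", []):
    attr = event["attributes"]
    # relation-type-id distinguishes citation events (e.g. "is-referenced-by")
    # from usage events (e.g. monthly view/download counts).
    print(attr["relation-type-id"], attr["subj-id"], attr["obj-id"])
```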
39. Looking Ahead
● Outreach and adoption
○ Repositories & publishers
● Iterating on our implementation
○ Adding volume and usage by region
○ Providing aggregation through the DataCite hub
○ Beyond the DOI: metrics for other types of identifiers
○ Possible: altmetrics