The document discusses the formation of the Gephi Consortium to advance the open-source Gephi network analysis platform. The consortium aims to build reusable parts of Gephi, improve the technology at low cost, and create interoperability standards. It will provide a legal structure for the community and an infrastructure for research and development efforts to build generic parts of Gephi through fundraising and community support.
SP1: Exploratory Network Analysis with Gephi (John Breslin)
ICWSM 2011 Tutorial
Sebastien Heymann and Julian Bilcke
Gephi is interactive visualization and exploration software for all kinds of networks and relational data: online social networks, emails, communication and financial networks, as well as semantic networks, inter-organizational networks and more. Designed to make data navigation and manipulation easy, it aims to cover the complete chain from data import to aesthetic refinement and interaction. Users interact with the visualization and manipulate structures, shapes and colors to reveal hidden properties. The goal is to help data analysts form hypotheses and intuitively discover patterns or errors in large data collections.
In this tutorial we will provide a hands-on demonstration of the essential functionalities of Gephi, based on a real-world scenario: the exploration of student networks from the "Facebook100" dataset (Social Structure of Facebook Networks, Amanda L. Traud et al., 2011). Participants will be guided step by step through the complete chain of representation, manipulation, layout, analysis and aesthetic refinement. Particular focus will be put on filters and metrics for the creation of their first visualizations. Participants will then be encouraged to compare the hypotheses suggested by their own exploration with the results actually published in the academic paper. They will finally walk away with the practical knowledge needed to use Gephi for their own projects. The tutorial is intended for professionals, researchers and graduates who wish to learn how playful network exploration can speed up their studies.
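Gephi's chain begins with data import, which can also start from a file generated programmatically. As a rough illustration of that import step, the sketch below writes a tiny invented friendship network as GEXF 1.2, one of the formats Gephi opens directly via File > Open; the node labels and edges are made up for the example, and this is not part of the tutorial's official material.

```python
# Sketch: write a small network as GEXF 1.2 for import into Gephi.
# The friendship data here is invented for illustration.
from xml.sax.saxutils import escape

def write_gexf(nodes, edges, path):
    """nodes: {id: label}; edges: [(source_id, target_id), ...]"""
    lines = [
        '<?xml version="1.0" encoding="UTF-8"?>',
        '<gexf xmlns="http://www.gexf.net/1.2draft" version="1.2">',
        '  <graph defaultedgetype="undirected">',
        '    <nodes>',
    ]
    for nid, label in nodes.items():
        lines.append(f'      <node id="{nid}" label="{escape(label)}" />')
    lines.append('    </nodes>')
    lines.append('    <edges>')
    for i, (src, dst) in enumerate(edges):
        lines.append(f'      <edge id="{i}" source="{src}" target="{dst}" />')
    lines += ['    </edges>', '  </graph>', '</gexf>']
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines))

nodes = {0: "Alice", 1: "Bob", 2: "Carol"}
edges = [(0, 1), (1, 2)]
write_gexf(nodes, edges, "friends.gexf")
```

Once opened in Gephi, such a file can be laid out, filtered and styled exactly as in the tutorial's Facebook100 walkthrough.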
Sébastien Heymann is a Ph.D. candidate in Computer Science at Université Pierre et Marie Curie, France. His research at the ComplexNetworks team focuses on the dynamics of real-world networks. He has led the Gephi project since 2008 and is the administrator of the Gephi Consortium.
Julian Bilcke is a Software Engineer at the ISC-PIF (Complex Systems Institute of Paris, France). He is a founder of the Gephi project and has been a developer on it since 2008.
Creativity Meets Rationale - Collaboration Patterns for Social Innovation (CommunitySense)
Collaborative communities require a wide range of face-to-face and online communication tools. Their socio-technical systems continuously grow, driven by evolving stakeholder requirements and newly available technologies. Designing tool systems that (continue to) match authentic community needs is not trivial. Collaboration patterns can help community members specify customized systems that capture their unique requirements while reusing lessons learnt by other communities. Such patterns are an excellent example of combining the strengths of creativity and rationale. In this chapter, we explore the role that collaboration patterns can play in designing the socio-technical infrastructure for collaborative communities. We do so via a cross-case analysis of three Dutch social innovation communities being set up simultaneously. Our goal with this case study is two-fold: (1) to understand what social innovation is through a socio-technical lens and (2) to explore how the rationale of collaboration patterns can be used to develop creative socio-technical solutions for working communities.
Collaboration Patterns as Building Blocks for Community Informatics (CommunitySense)
Community Informatics is a wide-ranging field of inquiry and practice in which many paradigms, disciplines, and perspectives intersect. Community Informatics research and practice build on several methodological pillars: contexts/values, cases, process/methodology, and systems. Socio-technical patterns and pattern languages are the glue that helps connect these pillars. Patterns define relatively stable solutions to recurring problems at the right level of abstraction: concrete enough to be useful, yet sufficiently abstract to be reusable. The goal of this paper is to outline a practical approach to improving CI research and practice through collaboration patterns. This approach should help strengthen the analysis, design, implementation, and evaluation of socio-technical community systems. The methodology is illustrated with examples from the ESSENCE (E-Science/Sensemaking/Climate Change) community.
Structured data on the Web, frequently referred to as knowledge graphs, consists of a large number of datasets representing diverse domains. Widely used commercial applications such as entity recommendation, search, question answering and knowledge discovery use these knowledge graphs as their knowledge source. The majority of these applications have a particular domain of interest and hence require only the segment of the Web of data representing that domain (e.g., movies, biomedicine, sports). In fact, leveraging the entire Web of data for a domain-specific application is not only computationally intensive; the irrelevant portion also negatively impacts the application's accuracy. Hence, finding the relevant portion of the Web of data for domain-specific applications has become a paramount issue. Identifying this relevant portion consists of two sub-tasks: (1) find the relevant datasets that contain knowledge on the domain of interest, and (2) extract the subgraph representing the domain of interest from knowledge graphs that cover multiple domains (e.g., DBpedia, YAGO, Freebase). In this talk, I will discuss both data-driven and knowledge-driven approaches to these two sub-tasks. The domain-specific subgraphs extracted by our approach were 80% smaller, in terms of the number of paths, than the original KG and yielded a more than tenfold reduction in the computational time required for domain-specific tasks, yet produced better accuracy on domain-specific applications. We believe that this work can significantly contribute to utilizing knowledge graphs for domain-specific applications, especially given the explosive growth in the creation of knowledge graphs.
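The second sub-task above can be pictured with a toy sketch: expand outward from seed entities of the target domain and keep only the triples reachable within a few hops. The triples, seed set, and 2-hop cutoff below are illustrative assumptions, not the speaker's actual method.

```python
# Sketch: extract a domain-specific subgraph from a multi-domain KG by
# keeping triples whose endpoints lie within max_hops of a seed entity.
# The tiny triple set is invented for illustration.
from collections import deque

def extract_subgraph(triples, seeds, max_hops=2):
    """triples: (subject, predicate, object) tuples; seeds: entity names."""
    adj = {}
    for s, p, o in triples:
        adj.setdefault(s, []).append(o)
        adj.setdefault(o, []).append(s)
    dist = {seed: 0 for seed in seeds}
    queue = deque(seeds)
    while queue:                       # breadth-first expansion from seeds
        node = queue.popleft()
        if dist[node] >= max_hops:
            continue
        for nb in adj.get(node, []):
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return [(s, p, o) for s, p, o in triples if s in dist and o in dist]

triples = [
    ("Inception", "directedBy", "Nolan"),
    ("Nolan", "bornIn", "London"),
    ("London", "capitalOf", "UK"),
    ("Aspirin", "treats", "Headache"),
]
# Movie domain: keeps the two Inception/Nolan triples, drops the rest.
movie_graph = extract_subgraph(triples, seeds={"Inception"}, max_hops=2)
```

Real data-driven or knowledge-driven extraction is of course far more selective than a plain hop limit, but the input/output shape is the same: a large KG in, a small domain subgraph out.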
Expanding the Academic Research Community: Building Bridges into Society with... (CommunitySense)
Academic research is under threat from issues like a lack of resources, fraud, and societal isolation. Such issues weaken the academic research process, from the framing of research questions to the evaluation of impact. After (re)defining this process, we examine how the academic research community could be expanded using the Internet. We examine two existing science-society collaborations that focus on data collection and analysis and then proceed with a scenario that covers expanding research stages like research question framing, dissemination, and impact assessment.
This short set of slides summarizes the characteristics of people who play specific roles in networks. In a social network analysis, people in these roles can be discovered by running mathematical algorithms through the social graphs. But you don't need to be an algorithm to spot some of these people in your networks!
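The algorithmic route mentioned above can be sketched with simple graph measures: degree centrality flags "hub" people, and a sparse neighborhood flags potential "brokers" who bridge otherwise unconnected contacts. The network, names, and role labels below are invented for illustration and are not taken from the slides.

```python
# Toy sketch: surface network-role candidates with two simple measures.
# The friendship edges are invented example data.
from itertools import combinations

edges = [("Ann", "Bo"), ("Ann", "Cy"), ("Ann", "Di"),
         ("Bo", "Cy"), ("Di", "Ed"), ("Ed", "Fay")]

neighbors = {}
for a, b in edges:
    neighbors.setdefault(a, set()).add(b)
    neighbors.setdefault(b, set()).add(a)

# Hub: the person with the most direct ties.
hub = max(neighbors, key=lambda n: len(neighbors[n]))

def broker_score(node):
    """Fraction of a node's neighbor pairs NOT directly connected to each
    other (1.0 = every tie among them runs through this person)."""
    pairs = list(combinations(neighbors[node], 2))
    if not pairs:
        return 0.0
    unlinked = sum(1 for u, v in pairs if v not in neighbors[u])
    return unlinked / len(pairs)

scores = {n: broker_score(n) for n in neighbors}
```

In this toy network Ann is the hub, while Di and Ed score as brokers: remove either one and the two halves of the network fall apart. As the slides note, you can often spot both by eye.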
The Navigation Layer - Making Sense Of It All (Jim Kalbach)
As we accumulate more and more information online, we’re inclined to add more and more metadata—so we can order it, manage it, and re-find it. This growing belt of metadata is referred to as the “navigation layer.” It’s the series of filters, categories, tags, and other devices that let us interact with information so we can sift out the noise.
What’s more, the navigation layer isn’t just about finding information—it can also help us make sense of the stuff we find. Sentiment analysis and entity extraction, for example, provide new insights into the information we come across. Ultimately, the navigation layer can point to high-order patterns that increase understanding.
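The filters and tags that make up the navigation layer can be pictured as a simple faceted query over item metadata. The bookmarks and tag vocabulary below are invented for the example; real systems add ranking, facet counts, and extracted entities on top of this core idea.

```python
# Sketch: faceted filtering over tagged items — the core mechanic of a
# navigation layer. The bookmark data is invented for illustration.
bookmarks = [
    {"title": "Graph layouts", "tags": {"visualization", "networks"}},
    {"title": "Tag gardening", "tags": {"metadata", "folksonomy"}},
    {"title": "Force Atlas", "tags": {"visualization", "algorithms"}},
]

def facet_filter(items, required_tags):
    """Return items carrying every tag in required_tags."""
    return [it for it in items if required_tags <= it["tags"]]

hits = facet_filter(bookmarks, {"visualization"})
```

Each extra tag in the query narrows the result set, which is exactly how the navigation layer "sifts out the noise."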
RDMkit, a Research Data Management Toolkit. Built by the Community for the ... (Carole Goble)
https://datascience.nih.gov/news/march-data-sharing-and-reuse-seminar 11 March 2022
Starting in 2023, the US National Institutes of Health (NIH) will require institutes and researchers receiving funding to include a Data Management Plan (DMP) in their grant applications, including making their data publicly available. Similar mandates are already in place in Europe; for example, a DMP is mandatory in Horizon Europe projects involving data.
Policy is one thing - practice is quite another. How do we provide the necessary information, guidance and advice to our bioscientists, researchers, data stewards and project managers? There are numerous repositories and standards. Which is best? What are the challenges at each step of the data lifecycle? How should different types of data be handled? What tools are available? Research Data Management advice is often too general to be useful, and specific information is fragmented and hard to find.
ELIXIR, the pan-national European Research Infrastructure for Life Science data, aims to enable research projects to operate “FAIR data first”. ELIXIR supports researchers across their whole RDM lifecycle, navigating the complexity of a data ecosystem that bridges from local cyberinfrastructures to pan-national archives and across bio-domains.
The ELIXIR RDMkit (https://rdmkit.elixir-europe.org) is a toolkit built by the biosciences community, for the biosciences community, to provide the RDM information it needs. It is a framework for RDM advice and best practice and acts as a hub of RDM information, with links to tool registries, training materials, standards, and databases, and to services that offer deeper knowledge for DMP planning and FAIR-ification practices.
Launched in March 2021, the RDMkit has seen over 120 contributors provide nearly 100 pages of content and links to more than 300 tools. Content covers the data lifecycle and specialized domains in biology, national considerations, and examples of “tool assemblies” developed to support RDM. It has been accessed from over 123 countries, and at the top of the access list is … the United States.
The RDMkit is already a recommended resource of the European Commission. The platform, editorial, and contributor methods helped build a specialized sister toolkit for infectious diseases as part of the recently launched BY-COVID project. The toolkit's platform is the simplest we could manage - built on plain GitHub - and the whole development and contribution approach is tailored to be as lightweight and sustainable as possible.
In this talk, Carole and Frederik will present the RDMkit: aims and context, content, community management, how folks can contribute, and our future plans and potential prospects for trans-Atlantic cooperation.
Data policy must be partnered with data practice. Our researchers need to be the best informed in order to meet these new data management and data sharing mandates.
Cultivating Sustainable Software For Research (Neil Chue Hong)
Keynote given at the NSF Cyberinfrastructure Software and Sustainability Workshop, March 26th-27th 2009, Indianapolis.
Exploration of software sustainability based on experiences from the UK.
Research Software Sustainability takes a Village (Carole Goble)
The Research Software Alliance (ReSA) and the Netherlands eScience Center hosted a two-day international workshop to set the future agenda for national and international funders to support sustainable research software.
As the importance of software in research has become increasingly apparent, so has the urgent need to sustain it. Funders can play a crucial role in this respect by ensuring structural support. Over the past few years, a variety of methods for sustaining research software have been explored, including improving and extending funding policies and instruments. During the workshop, funding organizations joined forces to explore how they can effectively contribute to making research software sustainable.
This keynote helped frame the discussion from the perspective of community involvement in research software sustainability.
https://future-of-research-software.org/
This talk is available at: Goble, Carole. (2022, November 8). Research Software Sustainability takes a Village. International funders workshop, The Future of Research Software, Amsterdam, The Netherlands. Zenodo. https://doi.org/10.5281/zenodo.7304596
Opening Slides from ION Belfast by Chris Grundemann of the Internet Society. Introduces the Internet Society and the Deploy360 Programme that hosts the ION Conference Series.
More Than Just a Meeting Place: Leveraging online tools for action (ifPeople)
More than just a meeting place, the Internet is a tool for online collaboration. This presentation goes beyond using the web as a networking tool and looks at how to leverage online tools to get people to work together effectively. Presentation by ifPeople cofounders Christopher Johnson and Tirza Hollenhorst at the Pegasus Communications "Systems Thinking in Action" conference in Seattle, WA in November 2007.
Data ecosystems: turning data into public value (Dr. Slim Turki)
Africa Information Highway Live Exchange, Session 7
8 October 2021
The AIH Live Exchange between the Africa Information Highway team, partners and countries is a free monthly webinar hosted by the African Development Bank to discuss topics related to government data and statistics. This webinar series is the main platform for countries to share their experiences and best practices around open data, including the use of the AIH Open Data Platform.
This session is co-organized with the Luxembourg Institute of Science and Technology (LIST), a mission-driven Research and Technology Organization (RTO) that develops advanced technologies and delivers innovative products and services to industry and society. These innovations can also be used to address several societal challenges, particularly in the areas of the environment, security, education and culture, sustainable development, and the efficient use of resources.
Official statistical data are recognized as high-value datasets for society and the economy, enriching research, informing decision-making and enabling new products and services. The use of these authoritative data sources contributes to building a society with more empowered people, better policies, more effective and accountable decision-making, greater participation and stronger democratic mechanisms.
Official statistics are produced to be used and re-used to make an impact on society through a higher degree of openness and transparency, while ensuring confidentiality and, at the same time, providing equal access to information for citizens.
The value of data lies in its use and re-use. In this interactive webinar, you will learn new techniques to improve the use and re-use of your statistical data, going beyond the provision logic and adopting the ecosystem mindset. You will:
● Sharpen your capacity to identify and engage users, re-users and stakeholders (data ecosystem mapping);
● Effectively tackle technical and organizational barriers to stimulate data use and re-use;
● Smartly orchestrate a self-sustainable data ecosystem to increase the impact of statistical data.
This session is an opportunity for regional member countries to “sharpen their skills in making data used and re-used by developing an ecosystem mindset, to effectively build a sustainable community of users around their Open Data Platform, thus promoting transparency and better decision-making”.
Presented by Michael Victor, Abenet Yabowork, Jane Poole, Harrison Njamba, Erick Rutto and Peter Ballantyne at the ILRI open access week workshop, ILRI, Nairobi, 23-25 October 2019
The European Open Science Cloud: just what is it? (Carole Goble)
Presented at Jisc and CNI leaders conference 2018, 2 July 2018, Oxford, UK (https://www.jisc.ac.uk/events/jisc-and-cni-leaders-conference-02-jul-2018). The European Open Science Cloud. What exactly is it? In principle it is conceived as a virtual environment with open and seamless services for storage, management, analysis and re-use of research data, across borders and scientific disciplines. How? By federating existing scientific data infrastructures, currently dispersed across disciplines and Member States. In practice, what it is depends on the stakeholder. To European Research Infrastructures it’s a coordinated mission to organise and exchange their data, metadata, software and services to be FAIR – Findable, Accessible, Interoperable, Reusable – and to use e-Infrastructures, either EU or commercial. To EU e-Infrastructures offering data storage and cloud services, it’s a funding mission to integrate their services, policies and organisational structures, and to be used by the Research Infrastructures. To agencies it’s a means to promote Open Science, standardisation, cross-disciplinary research and coordinated investment with a dream of a “one stop shop” for researchers. And for Libraries?
Open Innovation - Best Practices for Raw Material Companies
Timo Ropponen
Mining and raw material companies have long and costly innovation cycles. The objective of the project was to build, on top of the established Open Innovation (OI) body of knowledge, a set of best practices and tools specifically tailored to raw material companies. The project consisted of an open innovation assessment study and the piloting of a digital collaboration tool in an online OI workshop at a mining company.
2018 is the Open Source Rookies report’s 10th anniversary, brought to you by Black Duck by Synopsys. This infographic shows the impressive number of projects started in 2017 and the distribution across the world and a wide range of categories. Narrowing them down was hard! The open source community continues to produce innovative and influential open source projects.
Presentation investigating the state of FAIR practice and what is needed to turn FAIR data into reality, given at the Danish FAIR conference in Copenhagen on 20th November 2018. https://vidensportal.deic.dk/en/Programme/FAIR_Toolbox_Nov2018 The presentation reflects on recent FAIR studies and international initiatives and outlines the recommendations emerging from the European Commission's FAIR Data Expert Group report - http://tinyurl.com/FAIR-EG
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
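The two papers above contrast plain text-chunk retrieval with graph-grounded retrieval. As a rough illustration of the latter idea (this is not FalkorDB's or Microsoft's actual implementation; the triples and helper names below are made up), a question can be grounded in facts pulled from a small knowledge graph before being handed to an LLM:

```python
# Illustrative sketch of the core GraphRAG idea: ground an LLM prompt
# in facts retrieved from a knowledge graph rather than raw text chunks.
# The tiny triple store and helper names here are hypothetical.

# Knowledge graph as (subject, predicate, object) triples.
TRIPLES = [
    ("Gephi", "is_a", "network visualization platform"),
    ("Gephi", "written_in", "Java"),
    ("FalkorDB", "is_a", "graph database"),
]

def retrieve_facts(entity: str) -> list[str]:
    """Collect all triples mentioning the entity, rendered as sentences."""
    return [f"{s} {p.replace('_', ' ')} {o}."
            for s, p, o in TRIPLES if entity in (s, o)]

def build_prompt(question: str, entity: str) -> str:
    """Assemble graph-derived context plus the question for the LLM."""
    context = "\n".join(retrieve_facts(entity))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is Gephi?", "Gephi"))
```

In a real system the triple lookup would be a graph-database query (e.g. against FalkorDB or a community-summarized graph as in Microsoft's paper), but the retrieve-then-prompt shape stays the same.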
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
JMeter webinar - integration with InfluxDB and Grafana
RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
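Under the hood, this integration rests on JMeter's Backend Listener pushing metrics to InfluxDB, which Grafana then queries and charts. A minimal sketch of the wire format involved (the measurement, tag and field names below are illustrative assumptions, not JMeter's exact schema):

```python
# Hedged sketch of the data flow behind the JMeter/InfluxDB/Grafana stack:
# JMeter's Backend Listener emits metric points in InfluxDB "line protocol",
# and Grafana charts whatever lands in the database. Sample values are made up.

def to_line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    """Render one metric point in InfluxDB line protocol:
    measurement,tag=...,tag=... field=...,field=... timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

point = to_line_protocol(
    "jmeter",
    {"application": "webapp", "transaction": "login"},
    {"count": 42, "avg": 187.5, "max": 910.0},
    1717000000000000000,
)
print(point)
```

Each load-test sample becomes one such point; Grafana dashboards then aggregate them by transaction, application, or time window.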
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Tobias Schneck
As AI technology pushes into IT, I wondered, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies that could be beneficial for, or limiting to, your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
UiPath Test Automation using UiPath Test Suite series, part 3
DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
DevOps and Testing slides at DASA Connect
Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also had a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
PHP Frameworks: I want to break free (IPC Berlin 2024)
Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk aims to encourage a more independent approach to using PHP frameworks and more deliberate decisions about when and how to rely on them.
Accelerate your Kubernetes clusters with Varnish Caching
Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
2. The Mission
We, the Gephi Team, INIST-CNRS, Linkfluence and WebAtlas, invite others to join
us on our mission to advance the development of Gephi, a platform for
relational data visualization and analysis, through the formation of the Gephi
Consortium.
We are interested in hearing from companies, research laboratories and
individuals who share our core goals:
• to build generic and reusable parts of Gephi,
• to improve the technology at low cost,
• and to create standards to ensure interoperability.
3. Why a Consortium?
What do research laboratories, big data companies and human societies have in
common? The recurrent need to map and understand complex
phenomena, revealing patterns and trends in relational data. Called “Data
Science”, this activity helps to turn data into products and to perform
research on the data deluge.
But efficient technology is mandatory to store, process and visualize
evolving data at scale.
The Gephi project aims to provide a tool to visualize, filter and interact with
every kind of relational data in real time. A generic infrastructure and long-
term vision are the keys to overcome these challenges. The Consortium is a
solution to foster the evolution and exploitation of the Gephi project.
4. A Pragmatic Vision
The ability to take data -- to be able to understand it, to process it, to extract
value from it, to visualize it, to communicate it -- is already at work.
Google, Amazon, Facebook, and LinkedIn have all tapped into their
data streams and made that the core of their success. Archeologists are now
more concerned with mining information than with doing fieldwork. Citizen
organizations track interlocking directorates, while the World Bank and the US
and UK governments follow the Open Data movement. Elsevier and Thomson Reuters
open their APIs to build high-value applications for information science.
Data visualization and analysis is the cornerstone of this evolution, and Gephi
can help in this task.
5. What is the Consortium?
A not-for-profit corporation under the French law of
July 1st, 1901 – the equivalent of a US 501(c)(3).
• Give the community a legal structure
• Provide a research infrastructure
• Propose legal solutions to use Gephi in business
Purpose:
R&D effort to build generic and reusable parts of Gephi
6. Activities
• Concentrate collective efforts and apply for funding
• Provide leadership in developments (roadmaps,
mentoring, support)
• Organize and support the community
7. Cost and Involvement
• Strategic Membership: €10,000 fee, 3-year commitment,
1 seat on the Board of Directors by right
• Corporate Membership: €3,000 fee, 1-year commitment,
as many representatives on the Board as Strategic Members
• Individual Membership: €45 fee,
1 representative on the Board
• Members elect the Management Committee
• Members vote for projects to support
• Members define expenses priorities
8. Benefits
• Strategic directions made according to the common
needs of members
• Personalized business made possible
• Competitive technology improved at low cost
• Technology risk reduced by creating standards
• Connections with skilled developers and talented
researchers
• Open innovation
9. Gephi in a Nutshell
« Like Photoshop™ for graphs. »
Helps data analysts reveal patterns and trends,
highlight outliers and tell stories with their data.
• Gephi is a network visualization platform
• Open-source, supported by a community
• Built for performance and usability
• Extensible, add plug-ins
• Windows, Mac OS X, Linux
10. Gephi Key Figures
From 2008 to 2010:
$2,000,000 development cost – COCOMO evaluation
150,000 lines of code
33,000 software downloads
20 contributors
8 contributor countries
3 UI languages: English, French and Spanish
600,000 pages viewed/year on gephi.org
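The slide does not state which COCOMO variant produced the $2,000,000 figure, but a basic organic-mode estimate for 150,000 lines of code lands in the same range. A sketch under assumed coefficients and an assumed monthly labor cost:

```python
# Hedged sketch of a basic COCOMO (organic mode) estimate for a
# 150,000-line codebase. The slide does not say which COCOMO variant
# or salary figure was used, so the coefficients (a=2.4, b=1.05) and
# the monthly labor cost below are assumptions for illustration.

def cocomo_effort_pm(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Estimated effort in person-months: a * KLOC^b (basic COCOMO, organic)."""
    return a * kloc ** b

effort = cocomo_effort_pm(150)   # ~460 person-months for 150 KLOC
cost = effort * 4300             # assumed average monthly cost in dollars
print(f"effort: {effort:.0f} person-months, cost: ${cost:,.0f}")
```

With these assumptions the model yields roughly $2M, consistent with the figure on the slide.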
11. Gephi Applications
• Social Network Analysis
• Semantic Web
• Visual Analytics
• Web mapping
• Infrastructure monitoring
• Business networks study
• Molecular and gene interaction research
• …