A few notes about the UK Ontology Network Meeting (http://dream.inf.ed.ac.uk/events/ukont-13/2013_workshop_program.html), which I attended on 11 April 2013.
myequivalents is a system to manage cross-references between entities that can be identified by pairs composed of a service name (e.g., EBI's ArrayExpress, Wikipedia) and an accession (e.g., E-MEXP-2514, Barack_Obama). For those familiar with the Semantic Web: we plan to support identification of entities via URIs and the owl:sameAs property. For those who already know MIRIAM and identifiers.org: myequivalents is more general than those systems, and we plan to support them in the future.
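The core idea can be sketched as grouping equivalent (service, accession) pairs into bundles and looking an entity up by any of its identifiers. This is a minimal illustrative sketch, not the actual myequivalents API; the service name "myexperiments" and accession "exp-93" are hypothetical.

```python
# Hypothetical sketch of the cross-reference idea behind myequivalents:
# entities are identified by (service, accession) pairs, and equivalent
# pairs are grouped into equivalence bundles.

class EquivalenceRegistry:
    def __init__(self):
        # maps each (service, accession) pair to its equivalence bundle
        self._bundles = {}

    def map(self, *pairs):
        """Declare that all the given (service, accession) pairs identify the same entity."""
        bundle = set(pairs)
        # merge with any bundle an incoming pair already belongs to
        for p in pairs:
            bundle |= self._bundles.get(p, set())
        for p in bundle:
            self._bundles[p] = bundle

    def equivalents(self, service, accession):
        """Return every known identifier pair for the entity behind this pair."""
        return self._bundles.get((service, accession), {(service, accession)})

reg = EquivalenceRegistry()
reg.map(("arrayexpress", "E-MEXP-2514"), ("myexperiments", "exp-93"))
print(reg.equivalents("arrayexpress", "E-MEXP-2514"))
```

Querying by either pair returns the whole bundle, which is what makes such a registry useful for resolving cross-references in both directions.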
Marco Brandizi and Keywan Hassani-Pak, Rothamsted Research, Invited Presentation at SWAT4HCLS 2022.
FAIR data principles have become a driving force in life sciences and other scientific domains, helping researchers to share their data and unlock its full potential for integrating information and making novel discoveries. Knowledge graphs are an ever more popular paradigm for modelling data according to such principles, and technologies such as graph databases are emerging as complementary to approaches like linked data. All of this includes the agronomy, farming and food domains. How advanced is the adoption of sound data management policies in these domains? How does it compare to other life sciences? In this presentation, we will talk about our practical experience, focusing on KnetMiner, a gene and molecular biology discovery platform, which is based on building and publishing knowledge graphs according to the FAIR principles, as well as on a mix of linked data standards for life sciences and recent graph database and API technologies. We will welcome questions and discussion from the audience about similar experiences.
This document discusses using AgriSchemas and schema.org to model and share interoperable agricultural data from sources like KnetMiner and DFW for use cases involving molecular biology, gene expression, literature, and experiments. AgriSchemas provides a way to formally represent heterogeneous agricultural data to support exploratory research and data integration/sharing according to FAIR principles. Examples show how gene, publication, experiment and other data from KnetMiner are modeled and made accessible via AgriSchemas and linked data formats. Ongoing work focuses on additional areas like host-pathogen interactions, weather data, and dataset metadata.
This document discusses AgriSchemas, which are lightweight schemas based on schema.org and Bioschemas that allow for sharing agricultural data in a standardized, interoperable way. It provides examples of use cases modeled with AgriSchemas covering molecular biology, gene expression, ontology annotations, experiments, literature, and more. Ongoing work includes developing additional use cases and integrating real data from sources like KnetMiner, EBI, and GXA using reusable ETL tools. The goal is to make agricultural data more FAIR by adopting standardized schemas.
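To give a feel for the lightweight-schema approach described above, here is a hypothetical JSON-LD sketch of how a gene record might be described with schema.org/Bioschemas-style terms. The URI, accession and property choices are illustrative assumptions, not a normative AgriSchemas profile.

```python
import json

# Illustrative sketch of a schema.org / Bioschemas-style gene record as JSON-LD.
# The @id, identifier and nested values are hypothetical examples.
gene = {
    "@context": "https://schema.org",
    "@type": "Gene",
    "@id": "http://example.org/resources/gene/TRAES_EXAMPLE_1",  # hypothetical URI
    "identifier": "TRAES_EXAMPLE_1",                             # hypothetical accession
    "name": "example drought-response gene",
    "subjectOf": {
        "@type": "ScholarlyArticle",
        "name": "An example publication describing the gene",
    },
}

# JSON-LD is plain JSON, so standard tooling can produce and consume it
print(json.dumps(gene, indent=2))
```

Because the payload is ordinary JSON, the same record can be embedded in a web page for search-engine harvesting or served from an API endpoint, which is much of the appeal of schema.org-based sharing.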
Sharing data with lightweight data standards, such as schema.org and Bioschemas. The KnetMiner case: an application for the agrifood domain and molecular biology.
Presented at Open Data Sicilia (#ODS2021)
How open data contribute to improving the world. The life science use case. The technical, social, ethical issues.
This was a talk given within the iGEM 2020 programme by the Imperial College London students' group (https://2020.igem.org/Team:Imperial_College), in a webinar organised by the SOAPLab group on the topic of Ethics of Automation. The other speaker of the day was the excellent Dr Brandon Sepulvado.
This document discusses efforts to make agricultural data more interoperable using standards like AgriSchemas. It provides examples of existing agricultural data sources like experimental data from EBI GXA, molecular biology data from Knetminer, and host-pathogen interaction data from PHI-Base. It describes work to model Knetminer and GXA data according to AgriSchemas and provide public SPARQL access. The hackathon goals are to further review and develop AgriSchemas, consider additional use cases and data sources, start defining the AgriSchemas schema/ontology, and work on converting real datasets and applications.
This workshop aims at gathering together practitioners of all levels and from a variety of research areas (agronomy, plant biology, food, life sciences, etc.) to compare best practices, points of view and projects about producing and consuming data in the agrifood field.
As happens in general for digital data, current trends in this arena include the integration of "traditional" semantics-based approaches (e.g., ontologies, RDF-based linked data) with lightweight schemas (e.g., Bioschemas/schema.org), the use of JSON-based APIs, and the development of data lakes and knowledge graphs based on NoSQL technologies and on graph databases built around property graphs (e.g., Neo4j, TinkerPop/Gremlin).
Workshop participants will get an opportunity to discuss how these approaches and technologies are being used in the agrifood field, for the purpose of realising the FAIR data principles and making data sharing a powerful tool for research, industry and socio-economic investigation. In particular, we will propose an interactive session outlining how participant-proposed datasets can be encoded through Bioschemas or similar approaches.
- The document contains notes from several talks on topics related to agriculture, artificial intelligence, data integration, publishing and interoperability, metadata, and data annotation/enrichment.
- Key topics discussed include using semantic web technologies and linked data to integrate diverse agricultural data from CGIAR, building recommendation systems and data access platforms for field trial data, and developing taxonomies and vocabularies for sharing agricultural information.
- Other talks addressed using knowledge graph embeddings and linked open data to predict drug-drug interactions, annotating genomic datasets with ontologies, and classifying genetic data using disease ontologies.
- Presentations also provided overviews of tools for metadata annotation, and of data publishing and sharing through Wikidata.
Getting the best of Linked Data and Property Graphs: rdf2neo and the KnetMine... (Rothamsted Research, UK)
Graph-based modelling is becoming more popular, in the sciences and elsewhere, as a flexible and powerful way to exploit data to power world-changing digital applications. Compared to the initial vision of the Semantic Web, knowledge graphs and graph databases are becoming a practical and computationally less formal way to manage graph data. On the other hand, linked data based on Semantic Web standards are a complementary, rather than alternative, approach to deal with these data, since they still provide a common way to represent and exchange information. In this paper we introduce rdf2neo, a tool to populate Neo4j databases starting from RDF data sets, based on a configurable mapping between the two. By employing agrigenomics-related real use cases, we show how such mapping can allow for a hybrid approach to the management of networked knowledge, based on taking advantage of the best of both RDF and property graphs.
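The essence of the RDF-to-property-graph mapping described above can be sketched in a few lines: rdf:type triples become node labels, triples with literal objects become node properties, and triples whose object is another resource become relationships. This is an illustrative sketch of the idea only, not rdf2neo's actual API or configuration language; the "ex:" prefix convention and the example gene/protein URIs are assumptions.

```python
# Illustrative sketch (not rdf2neo's actual implementation) of the core
# RDF -> property graph mapping: types -> labels, literals -> properties,
# resource objects -> relationships.

RDF_TYPE = "rdf:type"

def rdf_to_property_graph(triples):
    """Map (subject, predicate, object) triples to nodes and relationships."""
    nodes, rels = {}, []
    for s, p, o in triples:
        node = nodes.setdefault(s, {"labels": set(), "props": {}})
        if p == RDF_TYPE:
            node["labels"].add(o)           # rdf:type -> node label
        elif isinstance(o, str) and o.startswith("ex:"):
            # crude resource test for this sketch; real RDF distinguishes
            # IRIs from literals by term type, not by string prefix
            rels.append((s, p, o))          # resource object -> relationship
            nodes.setdefault(o, {"labels": set(), "props": {}})
        else:
            node["props"][p] = o            # literal object -> node property
    return nodes, rels

triples = [
    ("ex:gene1", RDF_TYPE, "Gene"),
    ("ex:gene1", "ex:prefName", "DREB2"),        # hypothetical gene name
    ("ex:gene1", "ex:encodes", "ex:protein1"),
    ("ex:protein1", RDF_TYPE, "Protein"),
]
nodes, rels = rdf_to_property_graph(triples)
```

In a real pipeline the resulting node and relationship records would then be loaded into Neo4j, e.g. via Cypher `CREATE`/`MERGE` statements; the sketch stops at the mapping because that is where the two data models actually meet.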
This document discusses behind the scenes aspects of KnetMiner, including its use of the Ondex Integrator to combine data from multiple sources into a unified graph and the conversion of this graph to property graph and RDF formats. It provides examples of querying the KnetMiner data using Cypher and SPARQL and discusses some of the tradeoffs between property graph and RDF/triple store approaches. Exercises are included for users to try querying the KnetMiner data and representing biological concepts and relationships in RDF using the Bio-KNO ontology.
Some considerations on using the two systems to manage molecular biology knowledge networks. This comes from: https://github.com/marco-brandizi/odx_neo4j_converter_test
Towards FAIRer Biological Knowledge Networks Using a Hybrid Linked Data and... (Rothamsted Research, UK)
Presented at Integrative Bioinformatics Conference (IB2018, Harpenden, 2018).
We describe how to use Semantic Web Technologies and graph databases like Neo4j to serve life science data and address the FAIR data principles.
Behind the Scenes of KnetMiner: Towards Standardised and Interoperable Knowle... (Rothamsted Research, UK)
Workshop within the Integrative Bioinformatics Conference (IB2018, Harpenden, 2018).
We describe how to use Semantic Web Technologies and graph databases like Neo4j to serve life science data and address the FAIR data principles.
graph2tab, a library to convert experimental workflow graphs into tabular for... (Rothamsted Research, UK)
A generic implementation of a method for producing spreadsheets out of pipeline graphs. See https://github.com/ISA-tools/graph2tab for details.
Presentation given to my group at EBI, on Feb 2, 2012.
Presentation on the EBI linked data, given at the SIB course on linked data for life science, Dec 2015.
This is a PDF version of the original presentation available on Prezi: http://tinyurl.com/ebirdfsib15
Building Linked Data for the EBI RDF Platform and biomedical samples: what we have learned and delivered during the Biomedbridges project. Original @ https://prezi.com/vxox0pgra6d7/biosd-linked-data-lessons-learned/
The document discusses the BioSamples Database (BioSD) and its conversion to linked data. BioSD aims to provide information about biological samples used in experiments in a centralized reference system. It was converted to linked data to allow for integration with other datasets, exploitation of ontologies, and improved searching. The conversion included changes to the data model and several improvements to the software. SPARQL queries are demonstrated to retrieve sample data and attributes. Potential new areas discussed include integrating geo-located samples with Google Maps and search by feature similarity.
The document discusses the BioSamples Database (BioSD), which provides a reference system for searching and browsing information about biological samples used in biomedical experiments. It focuses on the sample context independently of specific assay types or technologies. BioSD allows for consistency in sample annotations and common interfaces to access sample information and links to other data repositories. By modeling BioSD as linked data, it enables integration with related datasets, exploitation of ontologies for standardization, and enhanced modeling of sample attributes. This can support applications and new ways of querying the data using SPARQL.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working with unstructured data. Speakers present on related topics such as vector databases, LLMs, and managing data at scale. The intended audience includes machine learning engineers, data scientists, data engineers, software engineers, and PMs. This meetup was formerly the Milvus Meetup and is sponsored by Zilliz, maintainers of Milvus.
Enhanced Enterprise Intelligence with your personal AI Data Copilot (GetInData)
Recently we have observed the rise of open-source Large Language Models (LLMs) that are community-driven or developed by AI market leaders such as Meta (Llama3), Databricks (DBRX) and Snowflake (Arctic). On the other hand, there is growing interest in specialized, carefully fine-tuned yet relatively small models that can efficiently assist programmers in day-to-day tasks. Finally, Retrieval-Augmented Generation (RAG) architectures have gained a lot of traction as the preferred approach to LLM context and prompt augmentation for building conversational SQL data copilots, code copilots and chatbots.
In this presentation, we will show how, building upon these three concepts, we created a robust Data Copilot that can help to democratize access to company data assets and boost the performance of everyone working with data platforms.
- Why do we need yet another (open-source) Copilot?
- How can we build one?
- Architecture and evaluation
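The RAG pattern mentioned in the abstract can be sketched in two steps: retrieve the documents most relevant to a question, then splice them into the prompt sent to the LLM. This is a deliberately minimal sketch; real systems (including, presumably, the one described here) use vector embeddings and a vector database for retrieval rather than the naive word-overlap scoring used below, and the example documents are invented.

```python
# Minimal sketch of the Retrieval-Augmented Generation (RAG) pattern.
# Illustrative only: production systems score relevance with embeddings,
# not word overlap.

def retrieve(query, documents, k=2):
    """Rank documents by naive word overlap with the query and keep the top k."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Augment the user question with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The sales table lives in the warehouse schema.",
    "Quarterly revenue is computed from the sales table.",
    "The HR database stores employee records.",
]
prompt = build_prompt("how is quarterly revenue computed", docs)
```

The augmented prompt grounds the model's answer in company data, which is what lets a copilot answer questions about internal assets the base model never saw.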
The Building Blocks of QuestDB, a Time Series Database (javier ramirez)
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps as just another data type. However, when performing real-time analytics, timestamps should be first-class citizens, and we need rich time semantics to get the most out of our data. We also need to deal with ever-growing datasets while remaining performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review a history of some of the changes we have gone over the past two years to deal with late and unordered data, non-blocking writes, read-replicas, or faster batch ingestion.
Global Situational Awareness of A.I. and where it's headed (vikram sood)
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we're lucky, we'll be in an all-out race with the CCP; if we're unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data Lake (Walaa Eldin Moustafa)
Dynamic policy enforcement is becoming an increasingly important topic in today's world, where data privacy and compliance is a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) They are auto-generated from declarative data annotations. (2) They respect user-level consent and preferences. (3) They are context-aware, encoding a different set of transformations for different use cases. (4) They are portable; while the SQL logic is implemented in only one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
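Generating a compliance-enforcing view from declarative column annotations, as described above, can be sketched like this. This is an illustrative sketch of the general technique, not ViewShift's actual implementation; the table name, column names and policy vocabulary are all hypothetical.

```python
# Illustrative sketch of auto-generating a compliance-enforcing SQL view
# from declarative column annotations (hypothetical, not ViewShift's code).

def make_compliance_view(table, columns, annotations):
    """Wrap annotated columns in masking expressions; pass the rest through."""
    exprs = []
    for col in columns:
        policy = annotations.get(col)
        if policy == "mask":
            exprs.append(f"NULL AS {col}")              # drop the value entirely
        elif policy == "hash":
            exprs.append(f"sha2({col}, 256) AS {col}")  # pseudonymise the value
        else:
            exprs.append(col)                           # unannotated: pass through
    return f"CREATE VIEW {table}_compliant AS SELECT {', '.join(exprs)} FROM {table}"

sql = make_compliance_view("members", ["id", "email", "country"], {"email": "hash"})
print(sql)
```

Routing table resolutions to such generated views, as the slides describe, means every engine that reads the catalog gets the policy applied for free.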
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W... (Social Samosa)
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag... (sameer shah)
"Join us for STATATHON, a dynamic 2-day event dedicated to exploring statistical knowledge and its real-world applications. From theory to practice, participants engage in intensive learning sessions, workshops, and challenges, fostering a deeper understanding of statistical methodologies and their significance in various fields."