The Venice Time Machine aims to digitally preserve and provide access to historical documents from Venice through various projects. These include digitizing documents using tomography, developing tools to extract text from images, modeling the information and relationships within documents, building an information system to allow searching across documents, linking documents to related scholarship to enrich the content, and creating digital experiences to promote research and teaching. The goal is to make the historical record of Venice available while ensuring long-term preservation, and to demonstrate the value of digital humanities approaches and tools.
2. Who am I
Giovanni Colavizza
PhD student in Management of Technology, chair of Digital Humanities, EPFL
Previously: Computer Science, History, Archival and Library Sciences, two start-ups, and positions in IT and research.
3. Today
Venice Time Machine
1- Vision (where to go)
2- Pipeline (how) and Projects (what)
3- Methods and DH in context (or why, and how again)
6. Preservation
Digitisation and replication as a preservation strategy.
Quite complicated:
1- metadata (digital provenance)
2- replication protocols: IT infrastructure (centralised vs distributed)
3- rights and partners’ needs (far-away goal of open access for public heritage)
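To make points 1 and 2 concrete: a common pattern is to record fixity (checksums) plus basic provenance metadata for every digitised image, and ship that manifest with each replica so copies can be verified later. A minimal sketch of the pattern, with invented field names (an illustration, not the VTM infrastructure):

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def fixity_record(path: Path, scanner: str) -> dict:
    """Fixity + provenance for one digitised image (field names invented)."""
    return {
        "file": path.name,
        "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),  # detect bit rot
        "digitised_by": scanner,                                  # provenance
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

def build_manifest(folder: Path, scanner: str) -> str:
    """JSON manifest shipped alongside each replica for later verification."""
    return json.dumps(
        [fixity_record(p, scanner) for p in sorted(folder.glob("*.tif"))],
        indent=2,
    )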
9. Pipeline illustrated by projects
1. Digitisation (Tomography)
2. Image processing (Pre-processing Suite)
3. Content extraction (Automatic transcription, READ Project)
4. Information modelling (Garzoni Project)
5. Building an information system (Document Viewer)
6. Content enrichment and network effects (Linked Books Project)
7. Valorisation and use (GIS, digital experiences, …)
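Read as a data flow, these seven stages chain into a single document-processing pipeline. A minimal sketch of that composition, with stub functions whose names and signatures are mine (illustrative only, not the VTM code base):

# Stub stages; names and signatures are invented for illustration.
def digitise(document_id: str) -> bytes: ...            # 1. tomography / scanning
def preprocess(image: bytes) -> bytes: ...              # 2. image pre-processing
def transcribe(image: bytes) -> str: ...                # 3. content extraction
def model_information(text: str) -> dict: ...           # 4. entities and relations
def index(record: dict) -> None: ...                    # 5. information system
def enrich(record: dict) -> dict: ...                   # 6. links to scholarship
def publish(record: dict) -> None: ...                  # 7. valorisation and use

def run_pipeline(document_id: str) -> None:
    """Each stage consumes the previous stage's output."""
    image = preprocess(digitise(document_id))
    record = model_information(transcribe(image))
    index(record)
    publish(enrich(record))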
12. Pipeline illustrated by projects (recap of the seven-stage list above)
15. Pipeline illustrated by projects (recap)
16. Semi-automatic transcription
or the Big Data quest for script family resemblances
READ Horizon 2020 project: €8.2 million, 7 partners, maximum peer reviewers’ score.
22. Option 2: Word spotting and Neural Networks
Andrea Mazzei, ODOMA
Video pt. 2
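Word spotting retrieves occurrences of a query word image directly in page images, without full transcription. Below is a deliberately naive, non-neural sketch of the matching step; a fixed subsampled-pixel descriptor stands in for the learned CNN embedding the slide alludes to, and none of this is the project's actual code:

import numpy as np

def embed(patch: np.ndarray, shape=(16, 48)) -> np.ndarray:
    """Naive descriptor: subsample a grayscale word image to a fixed grid
    and L2-normalise. A trained CNN would replace this in practice."""
    rows = np.linspace(0, patch.shape[0] - 1, shape[0]).astype(int)
    cols = np.linspace(0, patch.shape[1] - 1, shape[1]).astype(int)
    v = patch[np.ix_(rows, cols)].astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def spot(query: np.ndarray, candidates: list, k: int = 5) -> list:
    """Rank candidate word patches by cosine similarity to the query."""
    q = embed(query)
    sims = [float(q @ embed(c)) for c in candidates]
    return sorted(range(len(candidates)), key=lambda i: -sims[i])[:k]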
23. Pipeline illustrated by projects (recap)
27. Pipeline illustrated by projects (recap)
29. Information system
Fabio Bortoluzzi, EPFL
Not all documents are the same in connecting to each other:
• Fiscal declarations (for taxation)
• Personal acts (contracts, testaments, etc.)
• State machinery (office holding)
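One way to read this slide: the information system must model heterogeneous document types whose records link to shared entities (people, offices, parcels) with different connection patterns. A minimal sketch with networkx; all names, shelfmarks and roles are invented for illustration:

import networkx as nx

# Toy graph: heterogeneous document types linking to a shared person entity.
G = nx.Graph()
G.add_node("decima_1514_042", kind="fiscal_declaration")
G.add_node("testament_1521_007", kind="personal_act")
G.add_node("office_register_1520", kind="state_machinery")
G.add_node("Marco Bembo", kind="person")

# Edges carry the role the person plays in each document.
G.add_edge("decima_1514_042", "Marco Bembo", role="taxpayer")
G.add_edge("testament_1521_007", "Marco Bembo", role="testator")
G.add_edge("office_register_1520", "Marco Bembo", role="office_holder")

# Cross-document query: every record connected to one person.
print(sorted(G.neighbors("Marco Bembo")))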
34. Pipeline illustrated by projects (recap)
35. Content enrichment
Linked Books Project
EPFL, Ca’ Foscari, Marciana; FNS funded.
Approx. half of the citations in the humanities are to primary sources [Wiberley (2009)].
Their use has hardly ever been studied with citation-analytic methods.
Network effects: directly link scholarship with primary sources.
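The enrichment step effectively builds a directed citation graph from secondary literature to archival sources, which can then be queried in both directions. A hypothetical sketch (all titles and shelfmarks invented):

import networkx as nx

# Directed citation graph: secondary literature -> primary (archival) sources.
cites = nx.DiGraph()
cites.add_edge("Monograph on Venetian guilds (1998)", "ASVe, Giustizia Vecchia, b. 112")
cites.add_edge("Article on apprenticeship (2005)", "ASVe, Giustizia Vecchia, b. 112")
cites.add_edge("Article on apprenticeship (2005)", "ASVe, Senato Mar, reg. 14")

# Reverse lookup: which scholarship cites this archival unit?
print(sorted(cites.predecessors("ASVe, Giustizia Vecchia, b. 112")))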
36. Content enrichment
• Primary and secondary sources
• Citation history (e.g. Google Scholar)
• Citation semantics
• Algorithmic History of the History of Venice
42. Pipeline illustrated by projects (recap)
47. VTM in the context of DH
1- The Big vs Small Data debate, or a proposal for reframing
2- The quest for evidence of value, or overcoming the DH drudgery conundrum
3- Humanities in the digital era, or why we need historians more than ever ;)
48. VTM in the context of DH
The Big vs Small Data debate, or a proposal for reframing
Big Data (for the humanities):
1- a matter of dimensions (in TB or PB)
2- networked, relational vs well-bounded (Kaplan 2015)
3- telescope vs microscope
“Data” are not big or small per se, but are so according to the observer. Do I want to aggregate or disaggregate? Do I have “larger” or “smaller” questions?
49. VTM in the context of DH
The Big vs Small Data debate, or a proposal for reframing
Macro / Meso / Micro
50. VTM in the context of DH
The quest for evidence of value, or overcoming the DH drudgery conundrum
Tool-building is not an end in itself. Developing tools to answer old questions should lead to new questions and perspectives. The great quest in DH now is for new arguments.
51. VTM in the context of DH
Humanities in the digital era, or why we need historians more than ever ;)
“historians are fundamentally in the business of taking complex, incomplete sources that are full of biases and errors, and interpreting them critically to develop an argument that answers a research question. Digital sources do not change this.”
Ian Gregory
52. VTM in the context of DH
Humanities in the digital era, or why we need historians more than ever ;)
“Data of different kinds must be understood in their historical relationship.”
Historians as critical arbiters of information, trained to work with time (“comparative modelling of multiple variables over time”, in jargon).
53. A brief introduction to the Venice Time Machine
Thank you
Giovanni Colavizza, EPFL
“Computers are incredibly fast, accurate and stupid; humans are incredibly slow, inaccurate and brilliant; together they are powerful beyond imagination.”
Albert Einstein (or was it someone else??)