Digital History Projects as Boundary Objects (Max Kemman)
This document summarizes a study of incentives for participants in digital history projects. It finds that while the official incentive is to build new tools for historical research, the realities differ for each community of practice (CoP). The research CoP prioritizes their own thesis work over the tool. The technology CoP focuses on building interfaces rather than ensuring the tool is stable for research. And the tool CoP sees the project as a way to prove concepts and gain continued funding. As a result, the project functions more as a boundary object that coordinates these different incentives than as a path to a finished tool for historians to use. Open questions remain around whether historians can do big data analysis before tools are ready.
This is the presentation of Juan Cruz-Benito's PhD thesis, "On data-driven systems analyzing, supporting and enhancing users' interaction and experience", defended on September 3, 2018 at the Faculty of Sciences, University of Salamanca, Spain. The thesis was awarded the highest qualification, "Sobresaliente Cum Laude".
Joe Corneli is a PhD student at the Knowledge Media Institute studying semantic adaptivity and social networking in personal learning environments. His research focuses on developing a unified methodology to map activity patterns in social contexts to better support the learning process. He plans to implement his ideas using tools like Etherpad for analyzing live social interactions, RDF for managing relationship data, and WordNet for clustering and annotating content to help simplify information and connect resources for learners. By the end of his PhD, he hopes to build a "PLE IDE" tool to offer personalized support for learners and developers.
Big Data and Data Mining - Lecture 3 in Introduction to Computational Social ... (Lauri Eloranta)
Third lecture of the course CSS01: Introduction to Computational Social Science at the University of Helsinki, Spring 2015 (http://blogs.helsinki.fi/computationalsocialscience/).
Lecturer: Lauri Eloranta
Questions & Comments: https://twitter.com/laurieloranta
The document discusses potential thesis topics in the area of human-computer interaction (HCI) and information visualization. Specifically, it mentions two potential topics: (1) investigating novel visualizations and user feedback approaches to improve user understanding and agency in recommendation systems; and (2) researching how linked open data can be merged and visualized to enable exploration of research-related information. It provides examples of relevant literature and systems to consider, as well as emphasizing the importance of evaluation.
Sima Das is a PhD candidate at Missouri University of Science and Technology studying computer science. She has published several papers on analyzing large scale networks and predicting future links and contacts. Her research interests include information diffusion, temporal centrality metrics, and their effects in predicting mobility in dynamic networks. She has experience working on projects related to secure sensor cloud computing, green computing, wireless sensor networks, and mobile data management.
Ethical and Legal Issues in Computational Social Science - Lecture 7 in Intro... (Lauri Eloranta)
Seventh lecture of the course CSS01: Introduction to Computational Social Science at the University of Helsinki, Spring 2015 (http://blogs.helsinki.fi/computationalsocialscience/).
Lecturer: Lauri Eloranta
Questions & Comments: https://twitter.com/laurieloranta
Measuring Sophistication in Systemic Design and Computing (RSD7 Symposium)
This document discusses measuring sophistication in interdisciplinary fields like systems design and computing. It presents a model for conceptualizing knowledge across three disciplines and their intersections. Sophistication is envisioned as progressive understanding and skills in a domain. The document also discusses criticisms of this view and proposes measuring total sophistication, depth of specialization, and focus across disciplines using geometric concepts. Learning progressions are presented as a way to track student development over time.
Malaysia keynote "Ubiquitous Computing and Online Collaboration for Open Educ... (Steve McCarty)
"Ubiquitous Computing and Online Collaboration for Open Education." Keynote Address at the 5th International Malaysian Educational Technology Convention, Kuantan, Malaysia (17 October 2011).
This certificate recognizes the successful completion of an online program on tackling big data challenges from November 4 to December 16, 2014. The 20-hour program was developed by MIT's Computer Science and Artificial Intelligence Laboratory in collaboration with MIT Professional Education and edX. The certificate is signed by the executive director of MIT Professional Education and the directors of the Computer Science and Artificial Intelligence Laboratory and Big Data Initiative.
This document outlines an upcoming workshop on distributed cognition in socio-technical systems. The workshop will:
1) Explain the processes that distributed cognition models describe, and discuss critical issues.
2) Have groups use the distributed cognition model to elaborate on the services needed for future socio-technical systems like smart cities, artificially enhanced societies, and sustainable schools.
3) Discuss what HCI methods can collect data to inform distributed cognition research and what data flows need to be created.
Provocation talk given by David De Roure at the e-Science Institute, Edinburgh, 19 November 2007 as part of the ESI Strategic Advisory Board Workshop or "e-Science Think Tank"
Perspectives on Systemic Design: examining heterogeneous relevant literature ... (RSD7 Symposium)
This document provides an overview of the authors' planned review of the historical influences and development of systemic design. They intend to take a narrative approach, mapping the trajectory of influential ideas over time and exploring related themes that are "entangled" with systemic design. Their goals are to better understand and situate these influences to help advance the field. They will use systems thinking concepts like "entanglement" and rich pictures to frame their interpretations. The review aims to identify where systemic design is assumed but not acknowledged, to answer emerging questions, and suggest future research avenues at the intersection of disciplines.
An overview of Reality Mining and of research that has been and is being carried out in this context. The slide content was drawn from the studies and experiments of the MIT Media Lab (http://hd.media.mit.edu/), directed by Prof. Alex Pentland.
2009-C&T-NodeXL and social queries - a social media network analysis toolkit (Marc Smith)
This document introduces NodeXL, a network analysis toolkit implemented as an Excel add-in. NodeXL allows users to import social network data, calculate network metrics, and generate network graphs and visualizations within Excel. The document outlines NodeXL's key features, including importing data from sources like email and Twitter, calculating metrics like degree and centrality, and generating customizable node-link diagrams. It also discusses related work and provides an example analysis workflow using NodeXL to analyze an enterprise social network, revealing patterns in employee connections. NodeXL aims to make network analysis accessible to novice and expert users through a familiar spreadsheet interface.
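NodeXL itself runs inside Excel, but the degree metric it reports can be sketched in a few lines. The following is an illustrative stand-in, not NodeXL code, using a hypothetical "who-emails-whom" edge list:

```python
# Illustrative sketch (not NodeXL itself): the per-node degree metric
# NodeXL computes, applied to a hypothetical enterprise email network.
from collections import Counter

edges = [("ana", "bo"), ("ana", "cy"), ("bo", "cy"),
         ("cy", "dee"), ("dee", "ed")]

degree = Counter()
for a, b in edges:          # undirected: each edge counts for both endpoints
    degree[a] += 1
    degree[b] += 1

# The best-connected employee is the one with the highest degree;
# NodeXL would surface the same pattern as a node-link diagram.
hub = max(degree, key=degree.get)
print(hub, degree[hub])     # -> cy 3
```

Centrality measures such as betweenness follow the same pattern but weight nodes by the shortest paths that pass through them rather than by raw edge counts.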
Interface Design - an overview on recent findings in HCI research and examples of interfaces created by WebFoo Interface Division.
This slideshow was presented by our Creative Director, Mihai Varga, at a guest lecture at Surrey University in March 2014.
Computational models are increasingly being used to address complex sustainability challenges:
1) Computational techniques like system dynamics, agent-based modeling, and network analysis can help designers simulate social systems and prioritize interventions or stakeholder engagement for issues like plastic waste or sustainable industries.
2) However, modeling social systems raises questions around modeling human behavior, integrating modeling into design processes, and developing models with limited data.
3) Case studies are proposed to demonstrate how computational modeling could help redesign markets for material reuse and mental healthcare systems by simulating ecosystems and identifying sources of stagnation.
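The agent-based modeling mentioned in point 1 can be sketched minimally. The network, threshold, and scenario below are hypothetical, purely to illustrate how such a simulation can expose the "sources of stagnation" point 3 refers to: agents adopt a practice (say, material reuse) once enough of their neighbours have, and adoption stalls where the network is sparse.

```python
# Minimal threshold-based adoption model: a toy sketch of the kind of
# social-system simulation described above. All numbers are hypothetical.
def simulate(neighbours, seeds, threshold=0.5, steps=10):
    adopted = set(seeds)
    for _ in range(steps):
        new = {
            a for a, nbrs in neighbours.items()
            if a not in adopted
            and sum(n in adopted for n in nbrs) / len(nbrs) >= threshold
        }
        if not new:          # stagnation: no further adoption is possible
            break
        adopted |= new
    return adopted

# Adoption spreads from agent 0 but stalls at agent 4, whose links are
# mostly to unadopted agents -- a structural source of stagnation.
net = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4],
       4: [3, 5, 6], 5: [4], 6: [4]}
print(sorted(simulate(net, seeds={0})))   # -> [0, 1, 2, 3]
```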
Dartmouth discussion: What's wrong with "What's wrong with CHAT?"? (Clay Spinuzzi)
My slide deck from the workshop portion of the 2016 Dartmouth Institute. During this portion, respondents and I discussed my paper "What's Wrong with CHAT?" This deck encapsulates the argument, but also briefly introduces activity theory and discusses its development.
Presentation given at the HEA Social Sciences learning and teaching summit 'Exploring the implications of "the era of big data" for learning and teaching'.
A blog post outlining the issues discussed at the summit is available via: http://bit.ly/1lCBUIB
A Method to Select e-Infrastructure Components to Sustain (Daniel S. Katz)
This is a talk presented at International Symposium on Grids and Clouds (ISGC), Taipei, Taiwan, March 20, 2015.
Abstract:
Reusable infrastructure (systems created by one or more people and intended to be used by other people) has become essential for many types of research over the last century, from microscopes to telescopes, and from sequencers to colliders. Over the past few decades, much research infrastructure has become digital, and many new digital systems have been developed. Here we discuss e-Research infrastructure (also called cyberinfrastructure), which has been defined by Craig Stewart as consisting of “... computing systems, data storage systems, advanced instruments and data repositories, visualization environments, and people, all linked together by software and high performance networks to improve research productivity and enable breakthroughs not otherwise possible.” While the research infrastructure as a whole is important, it is useful to consider infrastructure elements as well, as they comprise the overall infrastructure. Each element has a technical context (which allows one to ask questions about its architecture, such as: How does it fit into the overall infrastructure? How does it interact with other infrastructure elements?), a social context (which allows one to ask questions about its developers, such as: Who has developed the element?, and its users, such as: Who uses the element?, and its purpose, such as: What is the intended use of the element?), and a political context (which allows one to ask questions about its funders, such as: Who funds the development and maintenance?, and about its political scope, such as: Is the element national? International?). Understanding how a particular infrastructure element can be created and sustained requires answering two pairs of questions: What resources are needed to create it, and how can those resources be assembled and applied? What resources are needed to sustain it, and how can those resources be assembled and applied?
In this paper, we focus on the second half of the two questions, since the amount and type of needed resources vary with the specific element being discussed. We believe elements of e-Research infrastructure can be placed in a three-dimensional space, consisting of temporal duration, spatial extent, and purpose. Note that the number of users of a given element should be larger the farther the element is from the origin in any direction, as should the cost. These two elements (number of users and cost) can be generically called ‘scale’ in this context. Alternatively, we can attempt to map impact, rather than usage, as an element of scale. In either case, scale is thus a metric of the space, though it is not orthogonal to any of the three axes. This talk will discuss how placing potential elements in this space allows decisions to be made on which elements should be pursued.
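The three-dimensional placement above can be sketched concretely. The abstract only says that scale should grow with distance from the origin, not how to compute it, so the element names, axis scores, and the Euclidean-distance proxy below are all assumptions for illustration:

```python
# Hedged sketch of the (duration, extent, purpose) space described above.
# Scores in 0-1 and the distance-from-origin proxy are hypothetical.
import math

elements = {  # (temporal duration, spatial extent, purpose-breadth)
    "lab script":       (0.1, 0.1, 0.1),
    "national archive": (0.8, 0.6, 0.4),
    "global collider":  (0.9, 1.0, 0.9),
}

def scale_proxy(pos):
    """Distance from the origin, standing in for 'scale' (users, cost)."""
    return math.sqrt(sum(c * c for c in pos))

ranked = sorted(elements, key=lambda e: scale_proxy(elements[e]))
print(ranked)   # smallest-scale element first
```

Ranking elements this way is one possible input to the sustainment decisions the talk discusses; the real metric need not be Euclidean, since the abstract notes scale is not orthogonal to any axis.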
Finding the emic in systemic design: Towards systemic ethnography (RSD7 Symposium)
This document discusses emic and etic perspectives in systemic design. It proposes a framework for assessing how emic, or insider, a given systemic design project is. The framework includes principles like prolonged engagement, triangulation, and member checking. The document applies this framework to analyze two case studies - a public procurement project at the University of Toronto and a national youth leadership summit in Canada. It finds the framework useful but notes limitations like case selection bias. The document concludes by discussing contributions, limitations, and opportunities for refining and field testing the framework.
The document discusses the semantic web and how it uses standards like SCORM to organize online educational content and learning objects. SCORM allows content to be aggregated, launched, and sequenced, and learner progress to be tracked. Learning objects can be any digital resources, such as images, documents, or courses, that can be reused, repurposed, or adapted. Communities of practice are groups that share knowledge over time through collaboration. Trends in online learning include a shift from return on investment to value of investment, and generating learning objects through communities of practice.
Causal networks, learning and inference - Introduction (Fabio Stella)
This document discusses causal inference and its importance in understanding data. It introduces Simpson's Paradox, where data can show one relationship at an aggregate level but the opposite relationship when separated into subgroups. Specifically, an example is given where a drug appears to lower recovery rates overall but increases recovery rates for both men and women when analyzed separately. This highlights that causation cannot be determined by statistics or machine learning alone and separating data into relevant subgroups is important for causal analysis. Understanding causation is crucial for guiding decisions and policies based on data.
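The reversal described can be checked with a few lines of arithmetic. The counts below are illustrative (the classic textbook version of the drug example), not taken from the document itself:

```python
# Worked Simpson's-paradox numbers: within each subgroup the drug looks
# better, but aggregated it looks worse, because far more women (the
# lower-recovery group either way) took the drug. Counts are illustrative.
def rate(recovered, total):
    return recovered / total

# Within each subgroup, the drug helps...
assert rate(81, 87) > rate(234, 270)     # men:   ~93% vs ~87%
assert rate(192, 263) > rate(55, 80)     # women: ~73% vs ~69%

# ...but pooled, it appears to hurt.
drug = (81 + 192, 87 + 263)      # (273, 350)
no_drug = (234 + 55, 270 + 80)   # (289, 350)
assert rate(*drug) < rate(*no_drug)

print(round(rate(*drug), 2), round(rate(*no_drug), 2))   # -> 0.78 0.83
```

This is exactly why the summary stresses that separating data into causally relevant subgroups matters: the pooled statistic alone would point to the wrong policy.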
MIT Program on Information Science Talk -- Julia Flanders on Jobs, Roles, Ski... (Micah Altman)
Julia Flanders, Director of the Digital Scholarship Group in the Northeastern University Library and a Professor of Practice in Northeastern's English Department, gave a talk on Jobs, Roles, Skills, Tools: Working in the Digital Academy as part of the Program on Information Science Brown Bag Series.
In the talk, illustrated by the slides below, Julia discusses the evolving landscape of digital humanities (and digital scholarship more broadly) and considers the relationship between technology, tool development, and professional roles.
For more see: http://informatics.mit.edu/event/brown-bag-jobs-roles-skills-tools-working-digital-academy-julia-flanders
Digital History Projects as Boundary Objects (Max Kemman)
Digital history projects can act as boundary objects that coordinate different incentives and allow participation across disciplines. They bring together researchers with different goals, like tool developers focused on technology, historians interested in research, and others focused on building the tool. While the official goal is to build a new tool for historical research, in reality the tool may not be stable enough for research in the project timeframe. This leads the different communities of practice within a project to shape it individually according to their own needs and incentives. Digital history projects require coordination to manage risks and expectations when incentives do not directly align with the overall goals of the project.
Presentation given at The Ends of the Humanities (10-13 September 2017, Belval, Luxembourg)
Abstract: http://www.maxkemman.nl/2017/09/abstract-interdisciplinary-ignorance/
The trek towards sustainability - truth, tale, or transition? (Birgit Penzenstadler)
This is the keynote given at the European Conference of Software Architecture, ECSA 2023, in Istanbul, Turkey. You may have heard about sustainability and the Sustainable Development Goals and the Global Reporting Initiative that now requires bigger companies to adjust their reporting in order to increase transparency. At the same time, you may have a funky feeling that there’s quite a bit of hot air and greenwashing going on around there. So how do we truly transition towards more sustainability? Why may we also want to think about more resilience? And what inner transition is required to make this big outer shift? In this talk, I give a brief (necessarily incomplete) overview of the last decade of sustainability research in and outside of software engineering and sketch a vision of what’s to come if we truly embrace a transition, and what may happen if we don’t.
Poster presentation on IA in Wikipedia Poster at the 2015 IA Summit in Minneapolis. Companion to IA Wikipedia Edit-a-Thon.
Full bibliography available at:
Books: https://docs.google.com/document/d/1BbzaObS6gLe1VhUknLqRcU5DgOrBzakmbxOfN-8Yyp0/edit?usp=docslist_api
Articles and Proceedings: https://docs.google.com/document/d/1YZMpHnH7mWtQ52qnjzBAZ-O7jy9nDRWJGPbNT4S7zCE/edit?usp=docslist_api
Big data and Digital Transformations in the HumanitiesMartin Wynne
The document discusses the opportunities and challenges of digital humanities research using large datasets. It outlines how new infrastructure initiatives have lowered barriers to digital research but that interoperability, sharing, and sustainability of resources remain difficult. The humanities risk becoming less relevant if new forms of data-driven research are not embraced, but care must be taken to avoid an overly empirical view that diminishes qualitative analysis. Achieving provisional standards and categories could promote shared infrastructure while still allowing traditional humanities criticism.
1. The document discusses using a Hybrid Social Learning Network (HSLN) to explore concepts, practices, designs, and smart services for networked professional learning. A HSLN combines formal and informal social structures through a "50-50 partnership" between people and machines.
2. Examples of social machines discussed include a tweet that led to an open source virtual organism project, the Reading the Riots analysis of social media during the 2011 London riots, and the Zooniverse citizen science platform. Smart services like Confer and KnowBrian were co-designed with UK health sector workers to support their professional learning.
3. Future work involves evaluating the impact of tools like Confer on professional learning and generalizing design
The document summarizes a presentation given by Musstanser Tinauli on their research activities and experiments. It discusses their goals of understanding how interactive environments can be measured and how tools influence user behavior. It describes ongoing case studies of games, e-learning platforms and digital pens. It outlines their methodological approach and provides results from studies on a digital pen and paper system, including lessons learned. Recent publications and collaborations are also mentioned.
The document summarizes efforts to support digital humanities research through collaboration at various institutions. It describes projects at Wheaton College involving students encoding a text using TEI XML under faculty supervision. It also discusses initiatives at the University of Vermont and Brown University to provide infrastructure and expertise for digital scholarship through partnerships between libraries, academic technology groups, and faculty researchers.
The document summarizes a presentation on linking information literacy and digital literacy in teaching. It discusses using AI tools like ChatGPT in a plagiarism workshop to make digital literacy aspects more explicit. The presentation defines information literacy and digital literacy, examines frameworks that link the two literacies, and provides an example workshop exploring how AI tools fit within definitions of plagiarism and scientific integrity. It encourages viewing the literacies as complementary and making digital aspects explicit as an initial step in education. The document concludes by inviting audience feedback on experimenting with AI tools.
Leading research in technoscience institutttseminar-281010NTNU
The document discusses how anthropology can lead research in technoscience using a phronetic approach. It presents three case studies where the author took on roles like facilitator, ethnographer and IT developer. Through these cases, the author applied concepts from actor-network theory like translation, inscription and punctualization to understand how technologies and organizations shape one another. The cases helped address questions about where developments are going, their desirability and how to inform practice using practical and value-rational approaches like phronesis.
Data Science: History repeated? – The heritage of the Free and Open Source GI...Peter Löwe
This document discusses the history and lessons that can be learned from the development of geographic information systems (GIS) and how they relate to the emerging field of data science. It argues that data science may follow a similar path to GIS, and outlines several lessons: (1) the importance of standardization, (2) the benefits of free and open source software in enabling analysis, education and improvement, and (3) the value of communities organized around open science principles of sharing and reuse. It highlights the Open Source Geospatial Foundation as an example of an "umbrella organization" that has supported collaborative development through established best practices around governance, software quality and merit-based participation.
This document discusses how information seeking has changed with new technologies and the importance of libraries adapting to remain relevant. It covers several key points:
1) The rise of digital information has created new challenges for information seekers to evaluate and make sense of vast amounts of data.
2) Libraries must help patrons navigate this environment by facilitating understanding, problem-solving and decision making.
3) Emerging technologies like mobile access, eReaders, social networking and cloud storage are shifting how users interact with information and each other.
4) Libraries are exploring new tools and platforms like apps, tutorials and social media to engage patrons wherever they are accessing information.
Introduction to Computational Social Science - Lecture 1Lauri Eloranta
1. The document provides an overview of the first lecture in an Introduction to Computational Social Science course. 2. It defines computational social science and discusses its main areas which include big data, social networks, social complexity, and simulation. 3. The lecture also explores some examples of computational social science research such as modeling the spread of disease, tracking news and meme propagation online, and simulating water demand.
Digital data is increasingly being used to track and analyze human activities like work, learning, and living. This document discusses how the "datafication" of these areas is redistributing responsibilities between humans and algorithms. It explores issues around accountability, control, and transparency when important decisions are made based on data. The author advocates developing new "literacies" to ensure data practices align with public interests and values, and calls for a posthuman perspective that sees humans and technology as deeply entangled.
This document provides an overview of digital humanities (DH), including definitions, a brief history, tools used in DH, and examples of DH projects and centers. DH is defined as using computational tools and methods to expand humanities research and communication. It has evolved from humanities computing beginning in the 1960s. Libraries play a key role in DH through activities like digitization, curation, and providing tools and space for DH work. The document discusses several DH tools and projects in South Africa and worldwide as illustrations.
This document summarizes the findings of a survey on boundary practices in digital humanities collaborations. The survey found that digital humanities collaborations often involve more humanities researchers than computational researchers, and are led primarily by those from the humanities. Additionally, most collaborators work in separate buildings and communicate remotely, rather than meeting in person as commonly assumed. The frequency of disciplinary communication was higher than interdisciplinary communication, suggesting scholars remain aligned with their own disciplines rather than developing common ground across disciplines as collaborations assume. Overall, the realities of digital humanities collaborations diverge from assumptions of equal participation, shared physical space, and development of interdisciplinary identities.
User Required? On the Value of User Research in the Digital HumanitiesMaxKemman
This document discusses the value of user research in developing digital tools for humanities research. It describes user research conducted for two tools: PoliMedia, which links Dutch parliamentary debates to media items, and Oral History Today, a search interface for oral histories. The research identified user requirements for both tools, though some requirements were deemed out of scope. Common requirements included searching by time period and names/roles of people. The discussion concludes that while generalizing requirements is difficult, user research helps ensure tools are usable and support researchers' broader workflows.
Too Many Varied User Requirements for Digital Humanities ProjectsMaxKemman
The document discusses two digital humanities projects that developed tools for scholars: PoliMedia and Oral History Today. User requirements were collected from scholars through interviews and evaluations. For both projects, there was a small overlap between user requirements and the project goals. Many requirements were deemed out of scope. This suggests that while scholars have clear ideas for their own research, their tool requirements are too varied for single projects to address. The conclusion is that repurposing data and tools in new ways may better meet scholars' diverse needs.
Talking With Scholars - Developing a Research Environment for Oral History Co...MaxKemman
Max Kemman discusses developing a research environment for oral history collections. He outlines four stages of research that scholars may go through: exploration and selection of collections, exploration and investigation of materials, presentation of results, and data curation. The system was evaluated in multiple cycles with scholars and is meant to provide search, filtering, bookmarking, and sharing capabilities for oral history collections.
Oral History Today - Search Interface for Oral History Research
Presented at CLARIAH meeting 11 September 2013 by Roeland Ordelman (NISV) and Max Kemman (EUR)
Slides in Dutch, slide notes in English
Building the PoliMedia search system; data- and user-drivenMaxKemman
Presentation at eHumanities group at Meerten's Institute (Amsterdam) on Thursday 18 April 2013.
Analysing media coverage across several types of media-outlets is a challenging task for (media) historians. A specific example of media coverage research investigates the coverage of political debates and how the representation of topics and people change over time. The PoliMedia project (http://www.polimedia.nl) aims to showcase the potential of cross-media analysis for research in the humanities, by 1) curating automatically detected semantic links between four data sets of different media types, and 2) developing a demonstrator application that allows researchers to deploy such an interlinked collection for quantitative and qualitative analysis of media coverage of debates in the Dutch parliament.
These two goals reflect the two perspectives on the development of a search system such as PoliMedia; data- and user-driven. In this presentation, Laura Hollink (VU) will present the data-driven perspective of linking between different datasets and the research questions that arise in achieving this linkage: how to combine different types of datasets and what kind of research questions are made possible by the data? Max Kemman (EUR) will present the user-driven perspective: which benefits can scholars have from linking of these datasets? What are the user requirements for the PoliMedia search system and how was the system evaluated with scholars in an eye tracking study?
User research for the development of search systemsMaxKemman
Presentation at Erasmus University Library 11-12-2012.
For the most part a combination of slides from previous presentations, mostly from http://www.slideshare.net/MaxKemman/mapping-the-use-of-digital-sources-amongst-humanities-scholars-in-the-netherlands
Mapping the use of digital sources amongst Humanities scholars in the Netherl...MaxKemman
1) The document reports on a survey of 294 Dutch and Belgian academics regarding their use of digital sources and databases.
2) It finds that text is the most commonly used digital medium, and Google is the dominant search tool and platform. Younger academics are more confident in using audiovisual search tools.
3) Disciplines like history and literature most commonly use images and digitized objects, while fields like social studies and linguistics make more use of video, audio, and statistical data.
4) The study has implications for how to increase awareness, appeal and adoption of digital humanities approaches through user-focused design and inclusion in education.
1) The PoliMedia project aims to link multimedia sources like newspaper articles and radio bulletins to discussions in the Dutch parliament to allow for better analysis of how media covered political debates.
2) It extracts structure and named entities from parliamentary debates and uses dates, topics, entities and speakers to automatically query media archives.
3) The current approach links debates to media coverage within a one-month period by searching archives for mentions of entities from each debate. This allows insight into how different media portrayed the same political discussions and events.
The document discusses a research project called PoliMedia that aims to analyze media coverage of political debates in the Dutch parliament from 1956 to 1995. The project will link multimedia sources like newspapers, television, and radio to provide insight into how different media covered topics and people over time. By connecting these sources through a portal, researchers can more easily browse and search debates and gain a better understanding of the relationships between media items. The project seeks collaboration to build structured datasets and a virtual workspace to support academic research.
Phenomics assisted breeding in crop improvementIshaGoswami9
As the population is increasing and will reach about 9 billion upto 2050. Also due to climate change, it is difficult to meet the food requirement of such a large population. Facing the challenges presented by resource shortages, climate
change, and increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional
genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding
the complex characteristics of multiple gene, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can
be linked to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus,
high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes
during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology,
and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 104 M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10−8 photons cm−2
s
−1
. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
Current Ms word generated power point presentation covers major details about the micronuclei test. It's significance and assays to conduct it. It is used to detect the micronuclei formation inside the cells of nearly every multicellular organism. It's formation takes place during chromosomal sepration at metaphase.
ESPP presentation to EU Waste Water Network, 4th June 2024 “EU policies driving nutrient removal and recycling
and the revised UWWTD (Urban Waste Water Treatment Directive)”
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
2. Digital history
An ethnographic study of digital history collaborations in the Benelux
Preliminary results for today: interviews
• 2 case studies (10 participants)
• 3 more projects for comparison (6 participants)
3. Rough structure of a collaboration
(Diagram of a typical project team: PI/Prof, computational researcher, and coordinator, connecting historians and PhDs with technologists and engineers)
4. Infrastructures
“[D]igital humanities, where there is an urge to work with large data sets and to create accompanying infrastructures for them” (Rogers 2013, p. 259)
“Infrastructure” here as the complex of:
• Connected datasets
• Data model
• Algorithms for analysis
• User interface
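The four layers above can be pictured as a minimal pipeline. The sketch below is purely illustrative and not taken from any project discussed in this deck; the dataset contents, record shapes, and names (`debates`, `newspapers`, `to_record`) are all invented for the example.

```python
# Illustrative sketch of the four infrastructure layers:
# connected datasets -> data model -> algorithm for analysis -> user interface.
from collections import Counter

# 1. Connected datasets: two toy sources that mention the same person.
debates = [
    {"speaker": "Jansen", "year": 1960, "topic": "housing"},
    {"speaker": "Jansen", "year": 1961, "topic": "education"},
]
newspapers = [
    {"person": "Jansen", "year": 1960, "title": "Housing debate heats up"},
]

# 2. Data model: normalise both sources into one shared record shape.
def to_record(source, item):
    name = item.get("speaker") or item.get("person")
    return {"source": source, "name": name, "year": item["year"]}

records = [to_record("debate", d) for d in debates] + \
          [to_record("newspaper", n) for n in newspapers]

# 3. Algorithm for analysis: count mentions per person across sources.
mentions = Counter(r["name"] for r in records)

# 4. User interface: here reduced to a printed summary.
for name, count in mentions.items():
    print(f"{name}: {count} mentions across sources")
```

Each layer depends on the one before it, which is part of why a tool can remain unstable for research: a change in any dataset or in the data model ripples through the analysis and the interface.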
5. Gains of infrastructure work
Working on infrastructures has two main incentives (Meyer & Schroeder 2015)
1. Attention for developed system
(Case 1, historian coordinator)
“During our launch event … people were honestly enthusiastic, they saw also the applicability of the software within their own organization, or to their own questions of connecting datasets. So I was very enthusiastic about that”
“The idea of the project has always been a proof-of-concept … and … bring attention to our system that we can try with them to get continued funding”
6. Gains of infrastructure work
Working on infrastructures has two main incentives (Meyer & Schroeder 2015)
2. Development of expertise
(Case 2, historian PI)
“[Other projects will] profit from our investment, the knowledge-building, the technical know-how, that we have developed”
7. Incentives of participants
Do all project participants share the ambitions of the project as developing systems and technical expertise?
8. What is the historian working on?
(Case 1, historian PI)
“What he must do is to write a thesis of 5 chapters that are of [theoretical] value… And there he must just write conventional stories, narratives. And at the same time he uses digital means in his research.”
(Case 1, computational linguist)
“[Y]ou go through a lot of trouble, manual effort and thinking of how to organize this thing, and what kind of labels you’re going to put and how you’re going to structure it. This is not something that is completely useless, it is also a valuable thing. But you have to tell him that ... he doesn’t really realize that the digital part of his research, is also research.”
(Case 1, historian PhD)
“All that happens ultimately has the goal that I can do my research better”
9. What is the technologist working on?
(Case 1, computational linguist)
“[T]he project is basically building an interface where all this information is presented in a ... user-friendly way, in a good way. That would be the success of the project. What I’m doing might provide additional information to this interface from the original text sources that are not yet structured, that would be good, but it’s not ... paramount for the success of the project.”
10. Discrepancy
Scholars are not necessarily concerned with the infrastructure as thing
Infrastructure as afterthought
What do we make of this?
• Do participants drift away from the project?
• Are projects not actually about infrastructure?
• Is the concept of “infrastructure” too narrow?
12. Infrastructure as process
Impossible to define the boundaries of infrastructures (Karasti & Blomberg 2017, Larkin 2013, Star & Ruhleder 1996)
• Separating not-yet-infrastructure from infrastructure
• Separating one infrastructure from another
• Separating infrastructure from person
In the field of CSCW, attention has turned to “infrastructure” not as a noun but as a verb: infrastructuring
13. Infrastructuring
“Infrastructuring” captures the complex and continuous process of development and use of technology (Björgvinsson et al. 2010)
• Design
• Coordination
• Development
• Appropriation
• Recontextualization
14. Infrastructuring in DH
• Design
  • Infrastructure not yet in place
  • Data model
• Coordination
  • Collaboration of scholars and computational researchers or developers
• Development
  • Connecting (heterogeneous) datasets
  • Tool or VRE
• Appropriation
  • Ideally used also by other scholars
• Recontextualization
15. Recontextualization
DH projects can be driven by (Owens 2014)
• Research question
• Data collection
• Technology
Projects go back and forth on what the main drive is, and often end up at technology
Instead: constant recontextualization
(Diagram: a cycle of recontextualization between research question (RQ), data, and technology)
16. Conclusions
Scholars in DH collaborations are not necessarily concerned with infrastructure as thing
Infrastructuring is a much broader sociotechnical practice of aligning digital technology with (scholarly) practices
Digital humanities is concerned with the alignment of digital technology with scholarship (Kaltenbrunner 2015)
Not as afterthought, but as the very essence
Digital humanities as infrastructuring
17. Further questions
• Would “infrastructuring” work as a definition of DH?
• Does “infrastructuring” exclude practices traditionally considered DH work, or include practices traditionally not considered DH?
• Does infrastructuring necessarily lead to infrastructures?
My PhD:
• How is infrastructuring in digital history collaborations performed and experienced?
• What are the power roles during infrastructuring?
• Does infrastructuring lead to acculturation?
18. References
Antonijević, S. (2015). Amongst digital humanists: an ethnographic study of digital knowledge production.
Björgvinsson, E., Ehn, P., & Hillgren, P.-A. (2010). Participatory design and “democratizing innovation.” In Proceedings of the 11th Biennial Participatory Design Conference (PDC ’10), pp. 41–50. New York: ACM Press.
Kaltenbrunner, W. (2015). Reflexive inertia: reinventing scholarship through digital practices. Leiden University.
Karasti, H., & Blomberg, J. (2017). Studying infrastructuring ethnographically. Computer Supported Cooperative Work (CSCW): An International Journal, 1–33.
Larkin, B. (2013). The politics and poetics of infrastructure. Annual Review of Anthropology, 42(1), 327–343.
Meyer, E. T., & Schroeder, R. (2015). Knowledge machines: digital transformations of the sciences and humanities. MIT Press.
Owens, T. (2014). Where to start? On research questions in the digital humanities. www.trevorowens.org
Rogers, R. (2013). Digital methods. MIT Press.
Star, S. L., & Ruhleder, K. (1996). Steps toward an ecology of infrastructure: design and access for large information spaces. Information Systems Research, 7(1), 111–134.