This presentation outlines five ways to find data on your reporting beat that can be developed into unique stories. It also outlines several data-driven story ideas on three beats: cops and courts, health, and government. And it includes exercises on how to sort in Excel and search for stories in government databases. It was created by Manuel Torres, enterprise editor for The Times-Picayune | Nola.com, for APME's NewsTrain in Monroe, La., on Oct. 15-16, 2015. It is accompanied by two handouts: "Data-Driven Enterprise off Your Beat" and "Help Getting Public Records." NewsTrain is a training initiative of Associated Press Media Editors: http://bit.ly/NewsTrain
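For readers who outgrow Excel, the sorting exercise the deck describes translates directly to Python's pandas. This is a minimal sketch; the dataset, column names, and values below are hypothetical placeholders for a beat dataset, not material from the presentation.

```python
import pandas as pd

# A tiny stand-in for a beat dataset (hypothetical values).
df = pd.DataFrame({
    "facility": ["Cafe A", "Diner B", "Grill C"],
    "violations": [2, 9, 5],
    "inspection_date": ["2015-03-01", "2015-02-11", "2015-04-20"],
})

# The Excel sorting exercise, in pandas: worst offenders to the top,
# oldest inspections first among ties.
worst_first = df.sort_values(["violations", "inspection_date"],
                             ascending=[False, True])
print(worst_first)
```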
GitHub as Transparency Device in Data Journalism, Open Data and Data Activism – Liliana Bounegru
Slides from a presentation of a research agenda on uses of GitHub in journalism at the Digital Methods Summer School 2015. More details here: http://lilianabounegru.org/2015/07/08/github-as-transparency-device-in-data-journalism-open-data-and-data-activism/
Northeastern Ohio nonprofit innovators met for the first annual Big Data for a Better World conference on November 16 at Hyland Software's sprawling Westlake, Ohio campus. Leading Hands Through Technology (LHTT) and Workman’s Circle teamed up to offer the event so local nonprofits could discuss how analytics could be used successfully to keep their organizations financially viable and ultimately improve the community.
Doing Digital Methods: Some Recent Highlights from Winter and Summer Schools – Liliana Bounegru
Talk given at the Digital Methods Winter School 2017 at the University of Amsterdam. It presents a selection of projects developed at the 2016 Digital Methods Winter and Summer Schools (www.digitalmethods.net).
Doing Social and Political Research in a Digital Age: An Introduction to Digi... – Liliana Bounegru
Lecture given at the National Center of Competence in Research: Challenges to Democracy in the 21st Century, 5 November 2015, Zürich University, Zürich, Switzerland
Data Journalism and the Remaking of Data Infrastructures – Liliana Bounegru
Talk given at the “Evidence and the Politics of Policymaking” Conference, University of Bath, 14th September 2016, on the basis of my PhD research at the University of Groningen and University of Ghent.
http://www.bath.ac.uk/ipr/events/news-0230.html
Mapping Issues with the Web: An Introduction to Digital Methods – Jonathan Gray
Slides from talk on "Mapping Issues with the Web: An Introduction to Digital Methods" at Tow Center for Digital Journalism, Columbia University, 23rd September 2014. Further details at: http://jonathangray.org/2014/09/10/mapping-issues-with-web-columbia/
Towards Explainable Fact Checking (DIKU Business Club presentation) – Isabelle Augenstein
Outline:
- Fact checking – what is it and why do we need it?
- False information online
- Content-based automatic fact checking
- Explainability – what is it and why do we need it?
- Making the right predictions for the right reasons
- Model training pipeline
- Explainable fact checking – some first solutions
- Rationale selection
- Generating free-text explanations
- Wrap-up
We all know students access Google every time they want to know something. But how accurate is the information that they find? What are the strategies you can employ to make sure you and your students are excellent 'crap detectors' (in the words of Howard Rheingold)? This presentation explores just some.
Tutorial on 'Explainability for NLP' given at the first ALPS (Advanced Language Processing) winter school: http://lig-alps.imag.fr/index.php/schedule/
The talk introduces the concepts of 'model understanding' as well as 'decision understanding' and provides examples of approaches from the areas of fact checking and text classification.
Exercises to go with the tutorial are available here: https://github.com/copenlu/ALPS_2021
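To make 'decision understanding' concrete, here is a toy sketch, not taken from the tutorial or the linked exercises: it scores each token of a claim by how much a classifier's predicted probability shifts when that token is removed, a simple occlusion-style attribution. The training sentences and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (invented): 0 = false claim, 1 = true claim.
train = ["vaccines cause autism", "the earth orbits the sun",
         "climate change is a hoax", "water boils at 100 celsius"]
labels = [0, 1, 0, 1]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(train, labels)

def token_importance(sentence: str):
    """Occlusion: how much does P(true) drop when each token is removed?"""
    base = clf.predict_proba([sentence])[0, 1]
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        occluded = " ".join(tokens[:i] + tokens[i + 1:])
        yield tok, base - clf.predict_proba([occluded])[0, 1]

for tok, delta in token_importance("climate change is a hoax"):
    print(f"{tok:10s} {delta:+.3f}")
```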
Journalists today are faced with an overwhelming abundance of data – from large collections of leaked documents, to public databases about lobbying or government spending, to ‘big data’ from social networks such as Twitter and Facebook. To stay relevant to society journalists are learning to process this data and separate signal from noise in order to provide valuable insights to their readers. This talk will address questions like: What is the potential of data journalism? Why is it relevant to society? And how can you get started?
Automatic fact checking is one of the more involved NLP tasks currently researched: not only does it require sentence understanding, but also an understanding of how claims relate to evidence documents and world knowledge. Moreover, there is still no common understanding in the automatic fact checking community of how the subtasks of fact checking — claim check-worthiness detection, evidence retrieval, veracity prediction — should be framed. This is partly owing to the complexity of the task, despite efforts to formalise the task of fact checking through the development of benchmark datasets.
The first part of the talk will be on automatically generating textual explanations for fact checking, thereby exposing some of the reasoning processes these models follow. The second part of the talk will be on re-examining how claim check-worthiness is defined and how check-worthy claims can be detected, followed by how to automatically generate claims that are hard to fact-check.
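As a minimal, hypothetical baseline for the check-worthiness subtask mentioned above: TF-IDF features plus logistic regression over sentences labeled check-worthy or not. The sentences and labels are invented, and real systems are far richer than this sketch.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "Unemployment fell by 3% last year.",      # factual, checkable
    "I think the weather is lovely today.",    # opinion, not check-worthy
    "The new bridge cost 2 billion dollars.",  # factual, checkable
    "What a wonderful crowd!",                 # not check-worthy
]
labels = [1, 0, 1, 0]  # 1 = check-worthy

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(sentences, labels)
print(model.predict(["The city spent 40 million on the stadium."]))
```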
What Actor-Network Theory (ANT) and digital methods can do for data journalis... – Liliana Bounegru
Slides from a talk I gave at the University of Ghent on 21 October 2014 about how Actor-Network Theory (ANT) and digital methods can be used to study and inform data journalism.
Boutique Big Data: Understanding 19th-Century Reprint Culture With Plagiarism... – M. H. Beals
From their earliest incarnations in the seventeenth century, through their Georgian expansion into provincial and colonial markets and culminating in their late-Victorian transformation into New Journalism, British newspapers have relied upon scissors-and-paste journalism to meet consumer demands for the latest political intelligence and diverting content. Although this practice, wherein one newspaper extracted or wholly duplicated content from another, is well known to scholars of the periodical press, in-depth analysis of the process is hindered by the lack of formal records relating to the reprinting process. Although anecdotes abound, attributions were rarely and inconsistently given and, with no legal requirement to recompense the original author, formal records of where material was obtained were unnecessary. Even if they had existed, the number of titles that relied upon reprinted material makes systematic analysis impossible; for many periodicals, only a few issues, let alone business records, survive. However, mass digitisation of these periodicals, in both photographic and machine-readable form, offers historians a new opportunity to rediscover the mechanics of nineteenth-century reprinting. By undertaking multi-modal and multi-scale analyses of digitised periodicals, we can begin to reconstruct the precise journeys these texts took from their first appearance to their multiple ends. Moreover, by repurposing individual ‘boutique’ research outputs within large-scale textual analyses, we can greatly enhance the resolution of our computer-aided conclusions and bridge the gaps between commercial, state and private databases.
This paper will explore the possibilities of large-scale reprint identification using out-of-the-box and project-specific software, the nature of multi-scale analysis, and how we might best reintegrate ‘boutique’ research into large-scale text-mining projects.
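To give a flavour of what reprint identification can look like at its simplest, here is a minimal sketch based on shared word n-grams ('shingles'). The sample sentences are invented, and real projects must additionally cope with OCR noise, text alignment, and corpus scale.

```python
def shingles(text: str, n: int = 5) -> set:
    """Return the set of n-word sequences in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a: str, b: str, n: int = 5) -> float:
    """Share of the shorter text's shingles that also appear in the other."""
    sa, sb = shingles(a, n), shingles(b, n)
    return len(sa & sb) / max(1, min(len(sa), len(sb)))

paper_a = "the latest political intelligence arrived by mail coach on tuesday evening"
paper_b = "we learn that the latest political intelligence arrived by mail coach late on tuesday"
print(f"shared 5-gram ratio: {overlap(paper_a, paper_b):.2f}")
```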
Evaluating Real World Information (NJLA 2018) – Megan Dempsey
Presented at the 2018 New Jersey Library Association Annual Conference. Discusses examples of misinformation and distorted information found online and a method for thinking critically about the information we encounter.
Complete Guide on Writing a Stellar Research Paper on Criminal Behavior – Lauren Bradshaw
How do you get ready for a research paper on criminal behavior? Which topic should you choose? How should a thesis statement sound? Find answers to all these questions in our guide.
AI and Social Justice: From Avoiding Harms to Positive Action – Alan Dix
Talk at The AI Summit New York, 8th Dec. 2021.
https://www.alandix.com/academic/talks/AI-Summit-NY-2021-AISJ/
AI, and in particular large-scale machine learning on big data, is transforming many areas of society including healthcare, education, and finance. At its best it offers the potential to improve society, for example by finding new pharmaceuticals. However, it may also reproduce or reinforce existing divisions and inequalities as well as creating new problems. This has been evident in high-profile news items such as racial discrimination in facial recognition systems used in policing. Yet some of the deepest problems of unequal access to technology are still to be fully felt.
Fortunately, AI can also be used positively to address issues of social injustice: for example, human-rights organisations scanning public-domain images for evidence of abuse, or software using adversarial techniques to reduce bias in training data. At best, some companies and institutions address these issues proactively, seeking to prevent or detect problems before they happen; for others this may be a rear-guard action to fix problems that have already emerged.
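As a small illustration of the measurement side of this work, simpler than the adversarial techniques mentioned above, the sketch below computes a demographic parity gap: the difference in favourable-outcome rates between two groups. The decisions and group labels are entirely synthetic.

```python
# Synthetic model decisions (1 = favourable outcome) and group labels.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "b", "b", "b", "a", "b", "a", "b"]

def positive_rate(group: str) -> float:
    """Fraction of favourable outcomes received by one group."""
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

gap = positive_rate("a") - positive_rate("b")
print(f"demographic parity gap: {gap:+.2f}")  # far from 0 suggests bias
```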
In this talk we will present a high-level landscape of the ways in which AI interacts with social justice and illustrate it through examples, so that we can take positive action for a fairer world.
Doug Caruso, assistant metro editor at The Columbus Dispatch, prepared this presentation on producing data-driven enterprise stories off your beat for the Columbus, Ohio, NewsTrain on Oct. 21, 2017. It is accompanied by a handout of the same title. NewsTrain is a training initiative of Associated Press Media Editors (APME). More info: http://bit.ly/NewsTrain
Webinar series on Canada's open government
The #GouvOuvert team is back with a new webinar on November 28! We will discuss the behind-the-scenes (#coulisses) of open data (#donnéesouvertes) in Canada with Professor @TraceyLauriault of @Carleton_U and @JaimieBoyd. Register now: http://ow.ly/UQvu50xabIb
Week 13 (Apr. 8) – Assemblages, Genealogies and Dynamic Nominalism
Course description:
The emphasis is on learning to envision data genealogically, as social and technical assemblages, and as infrastructure, and to reframe them beyond technological conceptions. During the term we will explore data, facts and truth; the power of data both big and small; governmentality and biopolitics; risk, probability and the taming of chance; algorithmic culture, dynamic nominalism, categorization and ontologies; the translation of people, space and social phenomena into and by data and software; and the role of data in the production of knowledge.
This class is formatted as a graduate MA seminar and a collaborative workshop. We will work with the Ottawa Police Service and critically examine the socio-technological data assemblage of that institution. This includes a field trip to the Elgin Street station and a tour of the 911 Communications Centre, and we will meet with data experts.
April 4, 2019, 17:30-19:30
IOG's Policy Crunch
Disruptive Innovation and Public Policy in the Digital Age event series
The Global Race in Digital Governance
https://iog.ca/events/the-global-race-in-digital-governance/
March 25, 2019, 9:30 AM
International Meeting of NAICS code Experts
Statistics Canada
Simon Goldberg Room, RH Coats building
100 Tunney’s Pasture Driveway
With research contributions by Ben Wright, Carleton University and Dustin Moores, University of Ottawa
Presented at the:
Canadian Aviation Safety Collaboration Forum
International Civil Aviation Organization (ICAO)
Montreal, QC
January 23, 2019
This presentation was made in real time while attending the Forum. The objective was to observe and listen, and to share some examples from outside this community that may provide insight into data-sharing models, with a focus on governance.
From Aspiration to Reality: Open Smart Cities
Open smart cities might become a reality for Canada. Globally there are a number of initiatives, programs, and practices that are open-smart-city-like, which means it is possible to have an open, responsive and engaged city that is socio-technologically enabled but also receptive to, and willing to grow, a critically informed type of technological citizenship (Feenberg). For an open smart city to exist, public officials, the private sector, scholars, civil society, residents and citizens require a definition and a guide to start the exercise of imagining what an open smart city might look like. There is much critical scholarship about the smart city and there are many counter-smart-city narratives, but there are few depictions of what engagement, participatory design and technological leadership might be, and the few examples that do exist are project-based rather than systemic. An open smart city definition and guide was therefore created by a group of stakeholders in such a way that it can be used as the basis for designing an open smart city from the ground up, or to help actors shape or steer emerging or ongoing data-driven and networked urbanist forms (Kitchin) of smart cities towards being open, engaged and receptive to technological citizenship.
This talk will discuss some of the successes resulting from this Open Smart Cities work, which might also be called a form of engaged scholarship. For example, the language of the call for tender for the Infrastructure Canada Smart City Challenge was modified to require that engagement and openness be part of communities' submissions. Those involved with the guide have also been writing policy articles that critique AI or the smart city while offering examples of what is possible; these articles are being read by proponents of Sidewalk Labs in Toronto. In addition, the global Open Data Conference held in Argentina in September 2018 hosted a full workshop on Open Smart Cities, and Open North is working toward developing key performance indicators to assess the communities shortlisted by Infrastructure Canada and to help them develop an Open Smart Cities submission. The objective of the talk is to demonstrate that it is actually possible to shift public policy on large infrastructure projects, at least in the short term.
This week we will learn about user generated content (UGC), citizen science, crowdsourcing & volunteered geographic information (VGI). We will also discuss divergent views on data humanitarianism.
Cottbus Brandenburg University of Technology, Lecture Series on Smart Regions
Critically Assembling Data, Processes & Things: Toward an Open Smart City
June 5, 2018
This lecture will critically focus on smart cities from a data-based socio-technological assemblage approach: a theoretical and methodological framework that allows for an empirical examination of how smart cities are socially and technically constructed, and for studying them as discursive regimes and as large technological infrastructural systems.
The lecture will refer to the research outcomes of the ERC funded Programmable City Project led by Rob Kitchin at Maynooth University and will feature examples of empirical research conducted in Dublin and other Irish cities.
In addition, the lecture will discuss the research outcomes of the Canadian Open Smart Cities project funded by the Government of Canada GeoConnections Program. Examples will be drawn from five case studies, namely the cities of Edmonton, Guelph, Ottawa and Montreal and the Ontario Smart Grid, as well as a number of international best practices. The recent Infrastructure Canada Smart City Challenge and the controversial Sidewalk Labs Waterfront Toronto project will also be discussed.
It will be argued that no two smart cities are alike, although technological solutionist and networked urbanist approaches dominate, and it is suggested that these kinds of smart cities may not live up to the promise of being better places to live.
In this lecture, the ideals of an Open Smart City are offered instead: a city in which residents, civil society, academics, and the private sector collaborate with public officials to mobilize data and technologies, when warranted, in an ethical, accountable and transparent way, in order to govern the city as a fair, viable and livable commons that balances economic development, social progress and environmental responsibility. Although an Open Smart City does not yet exist, it will be argued that it is possible.
Conference of Irish Geographies 2018
The Earth as Our Home
Automating Homelessness, May 12, 2018
The research for these studies is funded by a European Research Council Advanced Investigator award ERC-2012-AdG-323636-SOFTCITY.
Presentation #2: Open/Big Urban Data – Lessons Learned from the Programmable City Project
Mansion House, Dublin, May 9th, 2018, 10am-2pm
http://progcity.maynoothuniversity.ie/2018/03/lessons-for-smart-cities-from-the-programmable-city-project/
Open Smart City in Canada Project
Funded by: GeoConnections
Led by: Open North
Project core team:
Rachel Bloom & Jean-Noe Landry, Open North
Dr. Tracey P. Lauriault, Carleton University
David Fewer, LL.M., Canadian Internet Policy and Public Interest Clinic (CIPPIC)
Dr. Mark Fox, University of Toronto
Research Assistants Carleton University
Carly Livingstone
Stephen Letts
Introductory remarks
- Jean-Noe Landry, Executive Director, Open North
Webinar 2 includes:
- Summary of Webinar 1: E-Scan and Assessment of Smart Cities in Canada (listen at: http://bit.ly/2yp7H8k)
- Situating smart cities amongst current digital practices
- Towards guiding principles for Open Smart Cities
- Examples of best practices from international cities
- Observations & Next Steps
Webinar Presenters:
- Rachel Bloom, Open North
- Dr Tracey P. Lauriault, School of Journalism and Communication, Carleton University
Content Contributors:
- David Fewer, CIPPIC
- Mark Fox, U. of Toronto
- Stephen Letts (RA, Carleton U.)
Project Name:
- Open Smart Cities in Canada
Date:
- December 14, 2017
Canada is a data and technological society. There is no sector that is uninformed by data or unmediated by code, algorithms, software and infrastructure. Consider the Internet of Things (IoT), smart cities, and precision agriculture; or smart fisheries, forestry, and energy; and of course governing. In a data-based and technological society, leadership is the responsibility of all citizens: parents, teachers, scholars, administrators, public servants, nurses and doctors, mayors and councillors, fishers, builders, business people, industrialists, MPs, MLAs, the PM, and so on. In other words, leadership is distributed and requires people power. This form of citizenship, according to Andrew Feenberg, Canada Research Chair in Philosophy of Technology, requires agency, knowledge and the capacity to act, or power. In this GovMaker keynote I will introduce the concept of technological citizenship, discuss what principled public-interest governing might look like, and consider how we might go about critically applying philosophy in our daily practice. In terms of practice I will discuss innovative policy and regulation, such as the right-to-repair movement, EU legislation such as the right to explanation and data subjects' right of access, and data sovereignty from both a globalization and an indigenous perspective.
AoIR 2017
Panel 17, Dorpat-Ewers, Tartu, 9-10:30 AM
Data Driven Ontology Practices
The Real world objects of Ordnance Survey Ireland
Abstract is available here: https://www.conftool.com/aoir2017/index.php?page=browseSessions&form_session=258&presentations=show
More from Communication and Media Studies, Carleton University
Climate Impact of Software Testing at Nordic Testing Days – Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
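As a rough illustration of the footprint-measurement idea, here is a back-of-envelope estimate of CI test-run emissions. Every constant below is an invented placeholder, not a figure from the talk.

```python
# Back-of-envelope CO2e estimate for a CI test suite (illustrative constants).
runs_per_day = 120            # assumed CI test runs per day
minutes_per_run = 15          # assumed duration of one run
kw_per_runner = 0.2           # assumed average draw of one CI runner (kW)
grid_gco2_per_kwh = 300       # assumed grid carbon intensity (gCO2e/kWh)

kwh_per_day = runs_per_day * (minutes_per_run / 60) * kw_per_runner
print(f"{kwh_per_day * grid_gco2_per_kwh / 1000:.1f} kg CO2e per day")
```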
Removing Uninteresting Bytes in Software Fuzzing – Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux tools: Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security-analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns, and DIAR helps you find such seeds.
These are slides from the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
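The sketch below illustrates the general idea of pruning uninteresting seed bytes by greedy trimming against a coverage oracle. It is a toy reconstruction of the concept, not DIAR's actual algorithm, and `coverage_of` stands in for a real coverage-measurement harness around the target program.

```python
# Toy sketch: greedily drop seed bytes whose removal leaves coverage unchanged.
def prune_seed(seed: bytes, coverage_of) -> bytes:
    baseline = coverage_of(seed)
    i = 0
    while i < len(seed):
        candidate = seed[:i] + seed[i + 1:]
        if coverage_of(candidate) == baseline:
            seed = candidate  # byte i never mattered; drop it
        else:
            i += 1            # byte i affects behaviour; keep it
    return seed

# Demo with a fake target whose coverage depends only on 'a' and 'b'.
fake_coverage = lambda s: (b"a" in s, b"b" in s)
print(prune_seed(b"xxaxxbxx", fake_coverage))  # -> b'ab'
```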
Generative AI Deep Dive: Advancing from Proof of Concept to Production – Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 4. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf – Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share foundational concepts to build on.
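As one concrete foundation (distributed tracing), here is a minimal OpenTelemetry example in Python. It assumes the opentelemetry-api and opentelemetry-sdk packages are installed; the service and attribute names are invented for illustration.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire up a tracer that prints finished spans to the console.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name
with tracer.start_as_current_span("charge_card") as span:
    span.set_attribute("order.id", "A-1042")   # attach context for debugging
```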
UiPath Test Automation using UiPath Test Suite series, part 5 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing toolchains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
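To make the DBOM idea concrete, here is a minimal, hypothetical sketch of recording one deployment event as a structured record. The field names are illustrative only, not a standard schema or the speakers' product.

```python
import json
import hashlib
from datetime import datetime, timezone

def dbom_entry(artifact_name: str, artifact_bytes: bytes,
               version: str, environment: str) -> dict:
    """Record what was deployed, where, and with which content digest."""
    return {
        "artifact": artifact_name,
        "version": version,
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
        "environment": environment,
        "deployed_at": datetime.now(timezone.utc).isoformat(),
    }

# Demo with an in-memory stand-in for a built artifact.
print(json.dumps(dbom_entry("app.jar", b"demo-build", "1.4.2", "prod"), indent=2))
```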
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
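For a taste of that Python binding, here is a minimal sketch using pypowsybl's bundled IEEE 14-bus test network. The calls reflect my reading of the pypowsybl API and should be checked against its documentation; this is not material from the webinar itself.

```python
import pypowsybl as pp

network = pp.network.create_ieee14()    # load the bundled IEEE 14-bus grid
results = pp.loadflow.run_ac(network)   # run an AC power flow
print(results[0].status)                # convergence status of the main component

# Bus voltages (magnitude and angle) computed by the load flow.
print(network.get_buses()[["v_mag", "v_angle"]].head())
```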
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Communications Mining Series - Zero to Hero - Session 1 – DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference, 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We ended with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... – SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf – 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
1. Week 12, April 3, 2019
Media, Gender, and Sexuality
COMS 4604 A
Wednesdays, 2:35 pm-5:25 pm
Mackenzie Bldg. 3190
Counting Women
Dr. Tracey P. Lauriault
Assistant Professor, Critical Media and Big Data
Carleton University
Tracey.Lauriault@Carleton.ca
orcid.org/0000-0003-1847-2738
@TraceyLauriault
23. When things are known, actions can be taken
▪ Obesity was considered a moral defect; biology, research/science, and the political economy of demographics and locales have been shown to be factors associated with obesity, and it is now a social issue
▪ Homosexuals were considered deviants; genetics/science demonstrated a biological predisposition, changing the moral argument
▪ Air pollution leads to poor health; hence smog control & catalyzers
▪ Unfortunately, junk science can also lead to action! Creative class, gay facial recognition, biased data or corpora, etc.
▪ METHODOLOGY & CRITICAL THINKING
35. Dynamic Nominalism
Modified from Ian Hacking’s Dynamic Nominalism
Tracey P. Lauriault, 2012, Data, Infrastructures and Geographical Imaginations
36. [Slide diagram: the digital socio-technical assemblage, modified by Lauriault from Kitchin, 2014, The Data Revolution, Sage. A system/process that performs a task (material platform/hardware; code platform/operating system; code/algorithms/software; data(base); interface; reception/operation by users) sits within a context that frames it (systems of thought; forms of knowledge; finance; political economies; governmentalities and legalities; organisations and institutions; subjectivities and communities; marketplace; places; practices; flowline/lifecycle). Theoretical approaches are mapped onto these layers: HCI and remediation studies, critical code studies, software studies, new media studies, game studies, platform studies, surveillance studies, critical data studies, and algorithm studies.]
38. Femicide Definition
“the killing of females by males because they are females.”
PATH, InterCambios, MRC, WHO (2009) Strengthening Understandings of Femicide: Using Research to Galvanize Action and Accountability. Washington DC.
39. Official Statistics
• The Home Office now records and publishes data on homicide victims, the relationship of the victim to the principal suspect, and the sex of the victim.
• But it does not record the sex of the killer or connect different forms of male violence against women.
42. Why the Femicide Census?
1. Provide a clearer picture of domestic homicides in the UK by age/ethnic origin/relationship/profession/region/outcome;
2. Provide a clearer picture of men's fatal violence against women that is not committed by a partner or ex-partner;
3. Provide information to create advocacy tools offering concrete data on domestic violence homicides;
4. Provide data when NGOs working to end domestic violence against women are providing expert evidence on domestic homicides in civil cases or before the Coroner's court;
5. Provide comparisons and parallels between cases to identify where there is the potential for a systemic argument against the State for failing to protect the Right to Life; and
6. Provide a resource for academics researching femicides.
43. Shelving Justice
▪ Action and Participatory Research project
▪ Qualitative interviews
▪ Archival records
▪ Ethnographic observations
1. Context about police lethality, excessive force, neglect and mistrust in Detroit
2. Detailed description of how data are collected from victims and how these move through the system
3. List of states where there is a backlog of untested rape kits
4. Discussed the failure: serial rapists; exonerating the falsely accused; breach of trust; Justice Denied
5. Research Questions: Did the police know there were a large number of untested kits in storage? Were they aware they had a problem? If not, how did they not know? If so, why did they not see this as a problem?
6. Data: criteria to establish trustworthiness – credibility, transferability, dependability, confirmability
7. Described the problem using their data, quotes from interviews, and documentary evidence
(Campbell, 2015)
44. Shelving Justice
▪ Indicator: crime lab closed due to a high error rate
▪ Action: needed to examine police evidence storage
▪ Accidental discovery of the kits
▪ The record-keeping system did not flag this issue: evidence is logged and tagged, and storage is distributed
▪ When the issue was flagged, no action was taken: letters from the Prosecutor's office to the Chief of Police
▪ Semantic wars: 'discovery', the numbers debate
▪ IA investigation; 'random' test of 36 kits plus police reports
▪ Justifiable reasons? The reasons given blamed the victims: perceptions of credibility; the assumption that prostitutes cannot be raped; the complainant refused to prosecute; 'got what they got'; 'the assaults were not really rape'
(Campbell, 2015)
45. Shelving Justice = Justice Denied
▪ The failure to test meant that:
  ▪ Serial rapists continue to rape
  ▪ The falsely accused are not exonerated
  ▪ There is a breach of trust
▪ It is Justice Denied
51. What matters?
▪ Counting
▪ Qualitative and quantitative data
▪ Making things visible
▪ Who & what & where
▪ The way to count (methodology)
▪ Accuracy, reliability, bias, objectivity, quality, completeness
▪ Representation
▪ Science and critical thinking
▪ The story
▪ Making the data work