In a research context, artificially increasing image resolution through interpolation of additional pixels would generally not be considered appropriate, as it introduces artificial data that was not present in the original image. Minor sharpening could potentially be acceptable to improve clarity, but adding whole new pixels risks misleading viewers about what is actually depicted in the original data. The best approach is to capture images at a sufficiently high resolution during data collection.
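The point about interpolation introducing artificial data can be made concrete with a minimal sketch (not from the original slides): linear interpolation fabricates new pixel values as averages of their neighbors, so an upscaled image "looks" higher resolution even though the inserted values were never captured by the sensor.

```python
def upscale_row(pixels):
    """Double a 1-D row of pixel intensities by inserting the average
    of each adjacent pair. The inserted values are synthetic: they are
    computed from neighbors, not measured from the original scene."""
    out = []
    for a, b in zip(pixels, pixels[1:]):
        out.append(a)
        out.append((a + b) / 2)  # fabricated value, never captured
    out.append(pixels[-1])
    return out

row = [10, 20, 40]
print(upscale_row(row))  # -> [10, 15.0, 20, 30.0, 40]
```

Here 15.0 and 30.0 did not exist in the original data; real interpolation kernels (bilinear, bicubic) are more sophisticated, but the epistemic problem is the same.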
This document summarizes a survey of over 300 contributions to the field of content-based image retrieval (CBIR) from 2000-2008. It finds that interest and publications in CBIR have grown exponentially in this period, as the field has expanded to include techniques from computer vision, machine learning, and other areas. The survey reviews key approaches to CBIR, including addressing the real-world challenges of building useful image retrieval systems. It also examines related areas like evaluation of CBIR systems and applications to problems like image annotation.
A Pragmatic Perspective on Software Visualization, by Arie van Deursen
Slides of the keynote presentation at the 5th International IEEE/ACM Symposium on Software Visualization, SoftVis 2010. Salt Lake City, USA, October 2010.
This document provides an overview of Brian Fisher's background and research in visual analytics as a cognitive science. Some key points:
- Fisher has a background in experimental psychology and cognitive science and does research at the intersection of visualization, human-computer interaction, and cognitive science.
- He discusses the challenges of analyzing "big data" and how visual analytics can help by drawing on theories from cognitive science. Visual analytics needs to be built on theories of cognition, perception, and interaction.
- Fisher advocates for visual analytics to become a translational cognitive science by bridging fields like informatics, visualization, and psychology through collaborative work and shared research questions. His approach involves starting collaborative projects in the intersection of these fields.
This document discusses visualization for software analytics and identifies three key trends: 1) developers moving from solo coders to social coders, 2) software development shifting from code-centric to data-centric, and 3) visualization becoming ubiquitous rather than standalone. It provides examples of visualizations for software design, code, dynamic behavior, architecture, and human activities. It discusses how visualization can provide insights, support tasks, and communicate knowledge. It also outlines opportunities and challenges for visual analytics and ubiquitous visualization in software engineering.
User modeling involves creating explicit or implicit models of users to tailor systems to individual needs. The document discusses the history, purposes, techniques and challenges of user modeling. Early work in the 1970s and 1980s focused on developing user modeling shells and frameworks. More recent developments include using machine learning, emotions, and preferences in user models. Overall, user modeling aims to personalize systems but faces challenges in accurately inferring user attributes.
The document discusses designing user experiences for people with cognitive disabilities. It notes that 7% of the US population has some type of cognitive impairment. It outlines common types of cognitive disabilities, such as learning disabilities, attention disorders, traumatic brain injuries, and those related to aging. The document discusses challenges people with cognitive disabilities face when using technology, such as difficulty finding features, recovering from errors, and saving work. It also reviews the state of accessibility research, which has focused less on cognitive disabilities. The document proposes approaches like universal design, assistive technologies, and usability testing to help make technologies more accessible and usable for those with cognitive impairments.
The document discusses the human-centered design approach to data as a service. It emphasizes engaging with communities to understand local contexts and involving stakeholders throughout the research process. The presentation outlines steps for responsible research, including obtaining ethics approval, engaging gatekeepers, sensitizing researchers to cultural practices, and documenting engagement activities. It also discusses challenges around community research fatigue and ensuring information meets recipient needs in terms of being the right information, at the right time, for the right purpose.
Big Data for International Development, by Alex Rascanu
Alex Rascanu delivered the "Big Data for International Development" presentation at the International Development Conference held on February 7, 2015 at the University of Toronto Scarborough.
1) The document defines AI literacy as a set of competencies that enables individuals to critically evaluate AI technologies, communicate effectively with AI, and use AI as a tool.
2) It proposes 15 competencies across 5 themes - what AI is, what it can do, how it works, how it should be used, and how people perceive it.
3) The competencies focus on understanding intelligence, different types of AI, their strengths/weaknesses, how machine learning and data work, ethics, and interpreting AI systems.
The document provides an overview of an event on emerging trends in data science given by Dr. Joanne Luciano. It discusses the data science workflow and various processes involved. Some key trends highlighted include increased use of AI and machine learning in data management and reporting, growth of natural language processing, advances in deep learning, emphasis on data privacy and ethics. The document also promotes the new minor in data science offered at University of the Virgin Islands, covering required courses and examples of course sequences for different disciplines.
The document provides an overview of data science and what it entails. It discusses the hype around big data and data science, and how data science has evolved due to improvements in technology that allow for large-scale data processing. It defines data science as a process that involves collecting, cleaning, analyzing and extracting meaningful insights from data. Data scientists come from a variety of academic backgrounds and work in both industry and academia developing solutions to real-world problems using data-driven approaches.
Getting started in Data Science (April 2017, Los Angeles), by Thinkful
The document discusses the rise of data science and the skills needed for data scientists. It defines data science as the intersection of engineering, statistics, and communication. Data scientists analyze large datasets to answer important business questions. The document uses LinkedIn in 2006 as a case study, outlining how a data scientist there framed questions, collected and processed user data, explored patterns, and communicated results to improve the user experience and growth. It highlights tools like SQL, analytics software, and machine learning that data scientists use and stresses the importance of curiosity, technical skills, and strong communication for those interested in the field.
Algorithmic bias is a complex, ill-defined concept with many facets. It refers broadly to unjust outcomes from algorithms that aim to predict outcomes based on historical data, but the term is unclear and conflates different types, sources, and impacts of biases. Algorithmic bias arises from multiple factors, including the data used to train models (which reflects societal biases), how data is collected and organized over time in complex data pipelines (forgettance), and how predictions are interpreted and used. Understanding algorithmic bias requires examining algorithms as social and epistemological constructs that reflect and can exacerbate existing inequalities in how knowledge is defined and groups are differentiated through probabilistic analysis.
Working with data is a challenge for many organizations. Nonprofits in particular may need to collect and analyze sensitive, incomplete, and/or biased historical data about people. In this talk, Dr. Cori Faklaris of UNC Charlotte provides an overview of current AI capabilities and weaknesses to consider when integrating current AI technologies into the data workflow. The talk is organized around three takeaways: (1) For better or sometimes worse, AI provides you with “infinite interns.” (2) Give people permission & guardrails to learn what works with these “interns” and what doesn’t. (3) Create a roadmap for adding in more AI to assist nonprofit work, along with strategies for bias mitigation.
intro to data science Clustering and visualization of data science subfields ..., by jybufgofasfbkpoovh
This document provides an introduction to the field of data science. It defines data science as an interdisciplinary field that uses scientific methods and processes to extract knowledge and insights from large amounts of structured and unstructured data. The document discusses what data science is, why it has grown in importance recently due to massive data collection and computing power, and what skills and roles are involved in data science work. It also presents models of the data science process and team composition.
This is the presentation of Juan Cruz-Benito's PhD thesis, "On data-driven systems analyzing, supporting and enhancing users' interaction and experience," which was defended on September 3rd, 2018 in the Faculty of Sciences at the University of Salamanca, Spain. The PhD was graded with the maximum qualification, "Sobresaliente Cum Laude."
This document provides an overview of data science including its importance, what data scientists do, how the field has emerged, and how to become a data scientist. It notes that by 2018 the US could face shortages of people with data analytics skills. It then discusses how LinkedIn's early growth in 2006 exemplifies the data science process of framing questions, collecting and processing data, exploring patterns, and communicating results. Finally, it outlines the tools used in data science like SQL, analytics software, and machine learning and discusses getting started in the field through education, curiosity, and ongoing learning with mentorship support.
This document provides an overview of data science including its importance, what data scientists do, how the field has emerged, and how to become a data scientist. It discusses how data science can help answer important business questions using LinkedIn in 2006 as a case study. It also outlines the typical data science process of framing questions, collecting and cleaning data, exploring patterns, and communicating results. Finally, it introduces some common data science tools like SQL, analytics software, and machine learning algorithms and discusses options for continuing education in data science.
Guest presentation: SASUF Symposium: Digital Technologies, Big Data, and Cybersecurity, Vaal University of Technology, Vanderbijlpark, South Africa, 15 May 2018
University Public Driven Applications - Big Data and Organizational Design, by maria chiara pettenati
This document discusses improving access to and use of big data for university and public applications. It summarizes the discussions of a working group on this topic. The group examined current approaches to big data, potential future applications, and challenges. Recommendations focus on developing interdisciplinary education programs to train experts, providing open access to large datasets, and establishing frameworks and standards to support big data analysis. The goal is to leverage big data for addressing societal problems in areas like healthcare, transportation and the environment.
Data Science: Origins, Methods, Challenges and the future?, by Cagatay Turkay
Slides for my talk at City Unrulyversity on 18.03.15 in London. The talk discusses the term Data Science, touches on its origins and the types of data scientists, and offers a longer discussion of the data science process and the challenges analysts face.
And here is the abstract of the talk:
Data Science ... the term is everywhere now, on the news, recruitment sites, technology boards. "Data scientist" has even been named the sexiest job of the century. But what is it, really? Is it just hype, or a term that will be with us for some time?
This session will investigate where the term originates and how it relates to decades of research in established fields such as statistics, data mining, visualisation and machine learning. We will investigate how the field is evolving with the emergence of large, heterogeneous data resources. We will discuss the objectives, tools and challenges of data science as a practice, and look at examples from research and industrial applications.
The document reports on a project to develop data visualization techniques for cancer genomics. It was submitted in partial fulfillment of a Bachelor of Design degree. The project was supervised by Dr. Prasad Bokil in the Department of Design at IIT Guwahati from July 2014 to November 2014. The project aimed to address challenges in visualizing large and complex genomic cancer datasets by exploring new visualization methods and prototypes.
This document discusses the importance of data fluency skills in the 21st century. It defines key terms like data science, machine learning, data literacy, and statistical literacy. While these fields require extensive training, the document argues that domain expertise combined with basic data analysis skills can solve many problems. These basic skills include understanding data structures, using programming to interact with data, and exploratory data analysis through visualization. The data analysis process involves defining problems, collecting and preparing data, visualization and modeling, and communicating results. RStudio is presented as a tool that can support the entire data analysis process within a single integrated development environment.
Long nonfiction chapters are not in style and may never have been. While average nonfiction book chapters run about 4,000 to 7,000 words, some run several times that upper limit. The usual explanation is that such a chapter addresses some irreducible complexity that cannot be treated in shorter form. This slideshow explores some methods for writing longer chapters while still maintaining coherence, focus, and reader interest...and while using some technological tools to write and edit more efficiently.
Overcoming Reluctance to Pursuing Grant Funds in Academia, by Shalin Hai-Jew
Starting as an organization’s new grant writer can be a challenge, especially in a case where there has been a time lapse since the last one left. People get out of the habit of pursuing grant funds. This slideshow addresses some of the reasons for such reluctance and proposes some ways to mitigate these.
Similar to Editing Digital Imagery in Research: Exploring the Fidelity-to-Artificiality Continuum
Writing grants is one common way that those in institutions of higher education may acquire funds—small and large, one-off and continuing—to conduct research, hire faculty, researchers, learners, and others, update equipment, renovate or construct buildings, and achieve other work. This slideshow explores some aspects of the work of grant writing in the present moment in higher education.
Contrasting My Beginner Folk Art vs. Machine Co-Created Folk Art with an Art-..., by Shalin Hai-Jew
This document contrasts handmade folk art with machine-generated folk art created with an AI system. Handmade art involves material costs, learning over time, and serendipity, while machine art is more efficient but relies on the system's tendencies. Both can be used for self-expression, stress relief, and entertainment. However, handmade art may better support poetry, visual exploration, and thinking while machine art excels at structure, cultural references, and finding online audiences. The author views machine-assisted art as a collaboration that should augment but not replace manual skills.
Creating Seeding Visuals to Prompt Art-Making Generative AIsShalin Hai-Jew
Art-making generative AIs have come to the fore. A basic work pipeline typically involves starting with text prompts -> generated images. That image may be used to seed further iterations. Deep Dream Generator (DDG) enables the application of “modifiers” of various types (artist styles, visual adjectives, others) to be applied in addition to the text prompt.
Another approach involves beginning with a “seeding image,” a born-digital or digitized (born-analog) visual on which AI-generated art may be based for a multi-channel and multi-modal prompt. This slideshow provides some observations of how to think about seeding images, particularly in terms of how the DDG handles them, with its “algorithmic pareidolia” (“Deep Dream,” Wikipedia, July 3, 2023).
Human art-making is often about throwing mass-scale conversations. Artists are thought to help bridge humanity into the future. Whether generative AI art enables this or not is still not clear.
Multimodal “Art”-Making Generative AIs
Generative AI encompasses a broad range of computational technologies that emulate human intelligence across many domains including natural language processing, speech recognition, vision systems, gameplay, art creation, decision making, robotics and more. Generative AIs can be prompted through text, images or other modalities to create novel works based on their training data. Prompt engineering involves refining prompts to steer the AI's output. While generative AIs show promise for human-machine collaboration and art-making, challenges remain regarding factuality, derivative works, and achieving refined output.
Digital templates can provide structure for inputting information and also enable additional functionality like autocompletion, auto-correction, and dynamic layouts. Templates may be shared broadly and used in various applications. They are designed forms that can be created using a top-down or bottom-up approach and should be tested and evolved over time. Common examples of templates in higher education include forms, organizers, manuscripts, slideshows, videos, and digital learning objects.
In qualitative data analytics, computation is seen as complementing the work of human researchers by bolstering data analysis. Qualitative data analysis tools enable various types of computational analysis of both structured and unstructured data, including text analysis, visualization, and machine learning techniques. However, human researchers still play an important role in curating data and developing codebooks to guide both human and computational analysis of the data.
Common Neophyte Academic Book Manuscript Reviewer MistakesShalin Hai-Jew
1) Academic book reviewing is a common but often unpaid volunteer role that requires experience to avoid mistakes.
2) Neophyte or inexperienced reviewers must understand publishing context, ask relevant questions of manuscripts, and maintain impartiality and confidentiality.
3) Reviewers should approach their role with empathy, recognizing authors' challenges and investing time in preparation, while upholding quality standards to benefit authors, publishers, and disciplines.
Fashioning Text (and Image) Prompts for the CrAIyon Art-Making Generative AIShalin Hai-Jew
CrAIyon (formerly DALL-E after Salvador “Dali”) is a web-facing art-making generative AI tool online (https://www.craiyon.com/) that enables the uses of text (and image) prompts for the creation of watermarked, lightweight visuals. Counterintuitively, the rough visuals are much more usable for recombinations and remixes and recreations into usable digital visuals for various digital learning objects. The textual prompts are not particularly intuitive because of how the generative AI program was trained on mass-scale visuals). There is an art and occasional indirection to working prompts after each try, with the resulting nine-image proof sheets that CrAIyon outputs. The tool can be used iteratively for different outputs.
The tool sometimes turns out serendipitous surprises, including an occasional work so refined that it can be used / shared almost unedited. One challenge in using CrAIyon comes from their request for credit (for all non-subscribers to their service). Another comes from the visual watermarking (orange crayon at the bottom right of the image). However, this tool is quite useful for practical applications if one is willing to engage deep digital image editing (Adobe Photoshop, Adobe Illustrator).
Augmented Reality in Multi-Dimensionality: Design for Space, Motion, Multiple...Shalin Hai-Jew
Augmented reality (AR)—the use of digital overlays over physical space—manifests in a wide range of spaces (indoor, outdoor; virtual) and ways (in real space (with unaided human vision); in head gear; in smart glasses; on mobile devices, and others). There are various authoring technologies that enable the making of AR experiences for various users. This work uses a particular tool (Adobe Aero®) to explore ways to build AR for multiple dimensions, including the fourth dimension (motion, changes over time).
Based on the respective purposes of the AR experience, some basic heuristics are captured for
space design (1),
motion design (2),
multiple perception design (sight, smell, taste, sound, touch) (3),
and virtual- and tangible- interactivity (4).
The document provides an overview of the Adobe Aero training session, including pre-training, during training, and post-training steps. It then details the two hours of training, which include an introduction to augmented reality and the Adobe Aero app. Key concepts around AR like file types, scale, field of view, interaction design, and uses for teaching and learning are explained. The document outlines a simplified workflow for designing mobile AR experiences for education.
Some Ways to Conduct SoTL Research in Augmented Reality (AR) for Teaching and...Shalin Hai-Jew
One of the extant questions about augmented reality (AR) is how (in)effective it is for the teaching and learning in various formal, nonformal, and informal contexts. The research literature shows mixed findings, which are often highly context-based (and not generalizable). There are some non-trivial costs to the design/development/deployment of AR for teaching and learning. For the users, there is cognitive load on the working memory [(1) extraneous/poor design, (2) intrinsic/inherent difficulty in topic, and (3) germane/forming schemas]. For teachers, there are additional knowledge, skills, and abilities / attitudes (KSAs) that need to be brought to bear.
Exploring the Deep Dream Generator (an Art-Making Generative AI) Shalin Hai-Jew
The Deep Dream Generator was created by Google engineer Alexander Mordvintsev in 2014. It has a public facing instance at https://deepdreamgenerator.com/, which enables people to use text prompts and image prompts (individually or in combination) to inspire the art-generating generative AI to output images. This work highlights some process-based walk-throughs of the tool, some practical uses, some lightweight art learning, some aspects of the online social community on this platform, and other insights. Some works by the AI prompted by the presenter may be seen here: https://deepdreamgenerator.com/u/sjjalinn.
(This is the first draft of a slideshow that will be used in a conference later in the year.)
Augmented Reality for Learning and AccessibilityShalin Hai-Jew
Recently, the presenter conducted a systematic review of the academic literature and an environmental scan to learn how to set up an augmented reality (AR) shop at an institution of higher education. The ambition was to not only set up AR in an accessible and legal way but also be able to test for potential +/- effects of AR on teaching and learning. The research did not go past the review stage, because of a lack of funding, but some insights about accessibility in AR were acquired.
(The visuals are from Deep Dream Generator and CrAIyon.)
Engaging Pixabay as an open-source contributor to hone digital image editing,...Shalin Hai-Jew
This slideshow describes the author's early experiences with creating two accounts on Pixabay in order to advance digital editing skills in multimedia. The two accounts are located at https://pixabay.com/users/sjjalinn-28605710/ and https://pixabay.com/users/wavegenerics-29440244/ ...
This work explores four main spaces where researchers publish about educational technology: academic-commercial, open-access, open-source, and self-publishing.
Human-Machine Collaboration: Using art-making AI (CrAIyon) as cited work, o...Shalin Hai-Jew
It is early days for generative art AIs. What are some ways to use these to complement one's work while staying legal (legal-ish)?
Correction: .webp is a raster format
Getting Started with Augmented Reality (AR) in Online Teaching and Learning i...Shalin Hai-Jew
University creative shops are exploring whether they can get into the game of producing AR-enhanced experiences: campus tours, interactive gaming, virtual laboratories, exploratory art spaces, simulations, design labs, online / offline / blended teaching and learning modules, and other AR applications.
This work offers a basic environmental scan of the AR space for online teaching and learning, and it includes pedagogical design leads from the current research, technological knowhow, hands-on design / development / deployment of learning objects, and online teaching and learning methods.
Editing Digital Imagery in Research: Exploring the Fidelity-to-Artificiality Continuum
1. Editing Digital Imagery in Research: Exploring the Fidelity-to-Artificiality Continuum
Dr. Shalin Hai-Jew
Kansas State University
CHECK 2021
May 20, 2021
3. Various Junctures at Which Errors May Be Introduced (Aware or Unaware) (and Magnified)
• Project Setup: literature review, research design, team seating, work delegation (and crediting), research oversight, representations to funding agencies
• Project Execution: research, fieldwork; data capture; data recording; data archival; data cleaning; data storage; data analysis; data representations; technologies; resources
• Reportage: conference presentations; publications; data sharing
• Post-Release Vetting: double-blind peer review; data review; follow-on studies; administrative review
4. Common Risks and Challenges to Research Integrity
• in a context of…
• career (non)survival;
• time/budget/equipment limits;
• limited tools and limited resources;
• difficult and complex work in a complex environment;
• competing colleagues who seem to be doing better;
• competition and mutual advantage-taking;
• impression management, etc.
5. Common Risks and Challenges to Research Integrity (cont.)
• Dishonesty, Over-Claiming, Misrepresentations, Exaggerations
• Inappropriate Delegations and Handoffs (~ ghostwriting; data analytics as “scut work”; commercial pre-written papers)
• Poor Work / Unskilled Work / Rushed Work / Corner Cutting / Carelessness / Incomplete Work
• Non-Expertise / Insufficient Skill
• Incorrect Data Cleaning and / or Data Removal
• Conflicts of Interest: Nepotism, Bribe-Taking
• Staging, Re-enactment, Enactment (in In Vitro and In Vivo Research)
• P-hacking / Venue Shopping
• Rejecting Unexpected Research Results
• Sabotage (acts of malice)
• Data Corruption / Data Alteration
6. Common Risks and Challenges to Research Integrity (cont.)
• Data Fabrication
• Plagiarism / Derived Works / Lack of Originality (and Non-Crediting of Others)
• Credit Usurpation / Free Riding
• Data Leakage or Mishandling (confidentiality, PII, anonymity, NDAs, and others)
• Premature Release of Research
• Misappropriation of Others’ Ideas and Works, published or not, including from privileged communications
• Funds Misuse
• Real-World Contingencies and Accidents and Losses (and Mitigations or Non-Mitigations)
7. Common Risks and Challenges to Research Integrity (cont.)
• Poor Data Stewardship (technological obsolescence; no access to the needed data / poor data availability; poor data integrity; poor data confidentiality, and others)
• Lack of a Data Management Plan
• Non-Management of Data per the Data Management Plan
• Non-Preservation of Digital Data into Digital “Forever”
• Publishing Mills, Conference Mills, Etc.
• Fake Reviewers (including Impersonators of Persons in the Field), Fake Double-Blind Peer Reviews
• …and (many) others
8. Some Highlights from Prior Slides
• There are many complex steps in the research sequence, and errors may be introduced at any step.
• Every member of a team matters. Each has to hold up his/her own responsibilities, and each has to hold up the others effectively (even if this means contravening social conventions to call out others clearly and with respect).
• Leadership matters.
• Review occurs in the present; it occurs forwards and backwards in time. As more up-to-date techniques and technologies emerge, prior works can be checked against newer knowledge with more cutting-edge approaches. Truth will out.
9. Academics and Fraud
• One survey study examined actions that distort scientific knowledge but that do not include plagiarism (using others’ ideas without crediting them) and other forms of research misconduct.
• A minority of the respondents, 1.97%, admitted to have “fabricated, falsified or modified data or results at least once” and “up to 33.7% admitted other questionable research practices” (Fanelli, May 29, 2009).
• And: “14.12%” of survey respondents said that their colleagues engaged in data falsification and “up to 72% for other questionable research practices” (Fanelli, May 29, 2009).
10. Intrinsic and Extrinsic Risks for Researchers
Intrinsic Risks
• Personal ego
• Self-deception
• Particular Dark Triad personality dimensions
• Dated skills, especially in a context of high aspiration and high imagination
• Dated knowledge of standards
Extrinsic Risks
• One’s social network (depending on who is in it and what they think and how they behave)
• Poor leadership (micro, meso, and macro levels)
• Poor or de-toothed oversight
• Budget and financing drought
• A lack of integrity culture
11. Intrinsic and Extrinsic Risks for Researchers (cont.)
Intrinsic Risks
• Loose handling of assertions
• Likewise, loose handling of research data and image artifacts
• An inability to handle external pressures with reasoned responses
Extrinsic Risks
• Excessive pressure to perform
• Available technologies for digital image editing
• Some cultures that involve ignoring image manipulations
12. Intrinsic and Extrinsic Risks for Researchers (cont.)
Intrinsic Risks
• A personal resistance to reaching out to others for advice and support
• A lack of sufficient internalized trained professional ethics
Extrinsic Risks
• Monetary incentives
• Reputational incentives [being a “star” researcher may mean a heightened likelihood of research misbehavior but a lower likelihood “to be caught than average scientists” (Lacetera & Zirulia, 2009, p. 568)]
14. Positive Control on Digital Image Editing in a Research Context
Self-Deception
• Misunderstanding the image data
• Falling for spurious data
Other-Deception
• Misreporting the image data
• Emphasizing particular parts of a digital image such that there is a lack of a “pure data stream” (Anderson, Jan. 21, 1994)
15. Session
Research is a critical part of work and study in higher education. There are a raft of professional standards about how research may be conducted, to protect humans (and animals) in the processes…and to ensure accuracy and non-bias in the work. There are rules for data handling, so as to avoid potential human mistakes and / or manipulations in data handling, data cleaning, and other efforts. (You can engage in data exploration, but you need to avoid p-hacking, or seeking statistical significance from the data and reverse-engineering a hypothesis post hoc as if it were an a priori one. You can clean data by dropping outlier data points, but you cannot actively work to skew data.)
16. Session (cont.)
• Some research studies use various visuals in their work: photos (macro and micro), scans, screen captures, video stills, and others. The visuals are affected by the state of the analog world, the capturing devices and technologies, the parameter settings on the various devices, the skill of the persons, and other factors. By the time the researcher or student has an image for digital image editing, there may be various challenges: focus, color balance, depth of field, lighting, and others. The question is: How much more digital image editing can be done to that image while still maintaining fidelity (given that the image contains research data and is research data in most cases) and not lapsing into artificiality (and, potentially, fraud)? How can researchers stay as close to high-fidelity and true as possible? What are the right approaches in an age of controlling images down to the pixels (in raster imagery), artificial intelligence (AI)-enhanced digital image editing, “deep fakes,” and countermeasures against such (with built-in forgery detections)?
17. Session (cont.)
1. Is it fair to raise the image resolution for print (by enabling the software to interpolate missing pixels)?
2. Is it fair to change the white color balance?
3. Is it fair to change the lighting through artificial means?
4. Is it fair to zoom and crop for particular focuses / foci? Is it fair to rotate or flip or skew / tilt / lean a visual?
5. Is it fair to remove visual information and replace it with filler pixels?
6. Is it fair to move an object within the image to another location? Is it fair to resize?
7. Is it fair to apply a diagnostic color filter to highlight particular insights?
18. Session (cont.)
8. Is it fair to mask a visual? This means selectively hiding some parts of an image and selectively revealing or highlighting others.
9. Is it appropriate to compose / composite / fuse a visual or create whole combined images from various other images, in pieces and parts? In what visual representational contexts?
10. How much explanation should go into the descriptions of complex imagery? How much depth?
11. Is it fair to use selective language to point to some aspects of an image (as research data) but not others? Is it fair to avoid evidence that contravenes one’s (pet) hypothesis?
12. Is it fair to “batch process” a number of mostly similar images that includes a few that do not meet the basic parameters of the set?
19. Session (cont.)
13. If the researcher or team has a particular aesthetic preference for the visuals (or a branding message preference), how much can they express this in their digital imagery (as data)?
14. With the advent of machine learning and AI and their integration in Adobe Photoshop 2021, how much should a researcher use these features? Skin smoothing? Facial expression tweaking (using neural filters)? Artificial art style transfers? Others?
• This work will use Adobe Photoshop 2021 to show some of the capabilities of the tool and how these can be used in alignment with “true” and “non-true.” (Various fields and disciplines may have different standards for what edits may be made ethically in the profession.)
20. Self Intros
• Welcome!
• Who are you? What space do you work in? What digital images do you handle?
• What are your experiences with Adobe Photoshop?
• What are some topics you would want addressed in this session?
21. Some Thoughts from the Field
• “Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for insuring that originals are archived, and image manipulation steps reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post-processing, and protect their labs from any future allegations of scientific misconduct.”
• -- Jerry Sedgewick in “Acquisition and Post-Processing of Immunohistochemical Images” in Signal Transduction Immunohistochemistry (2017, p. 75; Ch. 4)
22. Some Thoughts from the Field (cont.)
• “…when procedures are in place for correct acquisition of images, the extent of post processing is minimized or eliminated” (such as for white balancing…tonal values in dynamic range…noise elimination…bitrates…and others)
• -- Jerry Sedgewick in “Acquisition and Post-Processing of Immunohistochemical Images” in Signal Transduction Immunohistochemistry (2017, p. 75; Ch. 4)
23. Some Thoughts from the Field (cont.)
• “Concerned not so much with intentional fraud, but rather with routine and ‘innocent’ yet inappropriate alteration of digital images, several high-profile science journals have recently introduced guidelines for authors regarding image manipulation, and are implementing in-house forensic procedures for screening submitted images.”
• -- Emma K. Frow in “Drawing a line: Setting guidelines for digital image processing in scientific journal articles” in Social Studies of Science [2012, 42(3), 369 – 392 (on p. 369)]
24. Some Thoughts from the Field (cont.)
• “In journals that check figures after acceptance, 20 – 25% of the papers contained at least one figure that did not comply with the journal’s instructions to authors. The scientific press continues to report a small, but steady stream of cases of fraudulent image manipulation.”
• -- Douglas W. Cromey’s “Digital images are data: And should be treated as such” in Methods in Molecular Biology (2013, 931, 1 – 27)
26. Pristine Master Set
• The research team maintains raw original photos and scans in the highest resolution in a pristine master set.
• This enables reversion to the original by the research team. (Some may want to “discard” the originals once more refined visuals are available, but that is not advisable. Anything discarded in an irretrievable way means an irrecoverable loss of data.)
• Enable a practical “undo.”
• This capability is somewhat mooted if something has already gone out in a published imageset or publication.
27. Proper Initial Image Capture / Acquisition
• Digital image editing can only do so much for a poorly captured initial image (although the technologies are improving, and Adobe Photoshop just came out with Super Resolution in March 2021, which relies on AI to interpolate additional pixels into an image for very high artificially enhanced fidelity).
• Ideally, the original image should be as information-rich as possible.
• It is not appropriate to introduce artificial pixels into a research image, since in many research cases the AI does not understand the context of the image. Added pixels may be misleading, and these pixels may be confused with actual data.
28. Provenance / Lineage of Imagery
• All visuals used have an established provenance (or lineage).
• It is clear where they came from and how they were acquired.
• There has been a clear “chain of custody.” Along the way, who can touch and influence the visuals / image data?
• It is clear how they have been handled since they were acquired.
• It is clear if they went through any conversions or transforms or image compressions.
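A chain of custody can be made concrete with an append-only edit log kept next to the images. This is a hypothetical sketch (the `log_edit` helper and its fields are illustrative assumptions, not a named standard), showing one way to record who touched an image, when, and with what operation:

```python
import datetime

def log_edit(log: list, image_id: str, operation: str, params: dict, editor: str) -> list:
    """Append one edit record to an image's provenance log.

    Each entry captures who did what, when, and with which parameters,
    so the full chain of custody can be reconstructed later.
    """
    log.append({
        "image": image_id,
        "operation": operation,
        "params": params,
        "editor": editor,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return log

# Example chain of custody for one (hypothetical) micrograph
log = []
log_edit(log, "sample_042.tif", "crop", {"box": [120, 80, 900, 700]}, "shj")
log_edit(log, "sample_042.tif", "white_balance", {"method": "gray-world"}, "shj")
```

Serialized to JSON and stored with the master set, such a log answers the slide's question of who could "touch and influence the visuals" at each step.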
29. Strong Foundations to Arrive at the Visual
• If the visual is about a model or a hypothesis or a framework or a concept, then the foundation for that should be solid. The theorizing should be logical. The underlying data should be solid.
• If the visual is from underlying data, then the acquisition, cleaning, analysis, and visual representation of that data should be solid.
• If the visual is from sourcing, that source should be solid.
• In other words, where a visual came from should offer a solid basis for the depiction. The depiction itself should follow correct conventions for representation. It should not be misleading or easily mis-interpretable.
30. About Controlling for Clarity
• Parts of the visual should be properly labeled.
• The parts of the visual should follow image conventions so as not to confuse users.
• Surrounding information about that visual should be accurate. This would be the text, other visuals, titling, and captioning.
• Proper sizing measures and dimensions should be indicated as relevant.
• Colors should be properly balanced.
31. About Controlling for Clarity (cont.)
• Legends should be accurate.
• The visual should be controlled for all possible intended and unintended uses of the visuals.
• Control for sins of commission and for sins of omission. Avoid suggestiveness. Avoid inaccurate assertions. Avoid omitting context. Avoid outsized claims.
• Make sure all digital and informational contents are accessible (across a range of perceptual and brain processing capabilities).
32. About Controlling for Uncertainty
• Uncertainty should be represented accurately. Assumptions should be represented accurately.
• If an amount of uncertainty can be represented, that amount should be accurate and indicated.
33. Understanding the Audience
• It helps to be able to understand the “interpretive lens” of the audience and how they will consume the visuals.
• Some challenges arise with a larger audience with a wide range of individuals of differing backgrounds and swaths of non-expertise in the space.
• Other challenges arise when a visual is separated from the original context and is not supported by augmenting and complementary information.
• Or similarly, there may be challenges when the visual is consumed in a stand-alone way even if it has not been separated from other informational contents.
35. 1. Image Resolution and Sharpening
• Sometimes, particular details may not be clear.
• A digital image editing tool enables the addition of artificial pixels for higher resolution and even “super resolution.”
• Ideally, raster images should have sufficient bits to map the respective images. (In some contexts, much higher resolution is required.)
• The color should be at least 16-bit to 24-bit color (true color), per pixel.
• If something is hand-drawn with technologies or born-digital, use vector representations for scalability without lossiness.
• Save images in non-lossy formats.
• Question: Is it legit to interpolate pixels for a low-resolution image? If so, why and when? What if the interpolation adds artifacts? (The “smart” assumptions of the AI behind such interpolations may result in distortions and some muddiness and some digital artifacts.)
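To see why interpolated pixels are estimates rather than recovered detail, consider the simplest interpolation scheme. This is a minimal pure-Python sketch (a grayscale image represented as a list of rows of integers; all names are illustrative, and real tools like Photoshop's Super Resolution use far more elaborate AI models):

```python
def bilinear_upscale(img, factor):
    """Upscale a grayscale image by bilinear interpolation.

    Every new pixel is a weighted average of its nearest original pixels:
    the added pixels are plausible guesses, not observed data.
    """
    h, w = len(img), len(img[0])
    out = []
    for y in range(h * factor):
        # Map the output coordinate back into the source grid
        sy = min(y / factor, h - 1)
        y0, y1 = int(sy), min(int(sy) + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(w * factor):
            sx = min(x / factor, w - 1)
            x0, x1 = int(sx), min(int(sx) + 1, w - 1)
            fx = sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(round(top * (1 - fy) + bot * fy))
        out.append(row)
    return out

tiny = [[0, 100],
        [100, 200]]
big = bilinear_upscale(tiny, 2)  # 4x4 output: 12 of the 16 pixels are synthesized
```

In the 2x upscale above, only four of the sixteen output values come from the original; the rest are manufactured in-between values, which is the ethical crux of "raising resolution" for a research image.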
36. 1. Image Resolution and Sharpening (cont.)
• The software editing tool enables the use of AI to sharpen edges within a certain identified “radius” of the identified edges.
• Another method may be to heighten contrast…or even change up the hue to make a more contrastive look and feel.
• Sometimes, removing color and offering a visual in b/w or grayscale can heighten focus on the lines / edges. It can heighten the sense of shapes.
• Question: Is it legit to “smart sharpen” an image? Or find edges? Or increase contrast? Or render the image in b/w or grayscale? If so, why and when? Within what limits?
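Grayscale conversion discards hue and keeps only brightness. One common recipe, assumed here for illustration since the slides do not name one, is the Rec. 601 luma weighting, which reflects the eye's differing sensitivity to red, green, and blue:

```python
def to_grayscale(pixels):
    """Convert (R, G, B) pixels to grayscale with Rec. 601 luma weights.

    Color information is discarded; only perceived brightness survives,
    which can heighten the sense of edges and shapes.
    """
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in pixels]

# Pure red, green, and blue map to noticeably different grays
gray = to_grayscale([(255, 0, 0), (0, 255, 0), (0, 0, 255)])
```

Because the mapping is many-to-one, distinct colors can collapse to the same gray, so a grayscale rendering can suppress real distinctions in the data as well as highlight shapes.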
37. 2. White Color Balance
• For truer color, and to control against too much warmth (yellow) or cold (blue) in imagery, photos should be balanced for more neutral tones. Setting the correct “white” is one approach.
• For print, the specular highlights should be a little muted, and the shadows should be somewhat lightened because of ink bleed. (These are the “curve” adjustments.)
• Question: For a print context, can colors be adjusted and “jumped” to represent accurately in print (CMYK)? If so, why and when? By how much?
• Question: What about using color to drive attention to a particular part of an image? In a labeled way? An unlabeled way?
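One classic automatic white-balance method, offered as an illustrative sketch rather than what Photoshop does internally, is the gray-world algorithm: assume the scene averages to neutral gray and scale each channel so the channel means agree. A warm (yellowish) cast shows up as inflated red/green means and is scaled back down:

```python
def gray_world_balance(pixels):
    """Gray-world white balance over a list of (R, G, B) pixels.

    Scales each channel so all three channel means equal their common
    average, pushing the overall cast toward neutral gray.
    """
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    target = sum(means) / 3  # the neutral gray level to aim for
    gains = [target / m if m else 1.0 for m in means]
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

balanced = gray_world_balance([(200, 100, 50), (100, 100, 100)])
```

The gray-world assumption fails for scenes that are legitimately dominated by one color (e.g., a stained tissue sample), which is exactly why automatic balancing of research imagery deserves the "why and when" scrutiny the slide calls for.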
38. 3. Artificial Lighting
• In post-production, it is possible to change various lighting effects on an image.
• Particular focal regions may be lit more to draw the human eye.
• The midtones (the brightness and colors between the highlights and the shadows) should be of sufficient detail for texture. (This is seen in the histogram in Photoshop.)
• Question: Should “brightness” and “contrast” be adjusted in a research image? If so, why and when? By how much?
• Question: Should the histogram be adjusted in a research image? If so, why and when? By how much?
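Brightness and contrast adjustments are, at base, a linear remap of pixel values. The sketch below (illustrative, pure Python, grayscale values 0–255) pairs the remap with a simple histogram, because clipping at either end of the histogram is where tonal data is silently destroyed:

```python
def adjust_brightness_contrast(pixels, brightness=0, contrast=1.0):
    """Linear adjustment: out = contrast * (in - 128) + 128 + brightness.

    Values are clipped to 0..255; any clipping discards tonal data and
    shows up as spikes at the ends of the histogram.
    """
    out = []
    for v in pixels:
        v2 = contrast * (v - 128) + 128 + brightness
        out.append(max(0, min(255, round(v2))))
    return out

def histogram(pixels, bins=8):
    """Count pixels per intensity bucket (256 levels split into `bins`)."""
    counts = [0] * bins
    for v in pixels:
        counts[v * bins // 256] += 1
    return counts
```

Comparing `histogram(original)` with `histogram(adjusted)` makes the slide's question concrete: a mild remap reshapes the distribution, while an aggressive one piles pixels into the 0 and 255 buckets, an irreversible loss.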
39. 4. Zooming and Cropping and Rotation and Flipping and Skewing
• Sometimes, especially in fieldwork and “the wild,” it is hard to control for photography and digital image capture.
• Sometimes, there are challenges in lab-based image captures as well.
• Sometimes, what is in-frame might be extraneous to the focus of the researcher(s).
• Question: Is it appropriate to zoom in an image? If so, why and when? By how much?
• Question: Is it appropriate to crop an image? If so, why and when? By how much?
40. 4. Zooming and Cropping and Rotation and
Flipping and Skewing (cont.)
• Rotating an image involves changing the original frame of the image by
turning the image clockwise or counter-clockwise by various degrees.
• Flipping an image along the vertical or horizontal axis involves changing
the perspective of the original image. These operations, in a sense, move
objects from where they originally were within the context of the image.
• Skewing an image involves tilting or leaning it.
• Question: Is it appropriate to rotate an image? If so, why and when? By how much?
• Question: Is it appropriate to flip an image? If so, why and when? By how much?
• Question: Is it appropriate to skew or tilt an image? If so, why and when? By how
much?
40
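The geometric operations discussed in slides 39–40 (crop, rotation, flip) are one-liners in Pillow, which is part of why they are so easy to apply without disclosure. A sketch on a hypothetical marker image (Pillow is an assumed dependency):

```python
from PIL import Image, ImageOps

# Hypothetical 40x20 image with a white marker at the top-left corner.
img = Image.new("RGB", (40, 20), "black")
img.putpixel((0, 0), (255, 255, 255))

cropped = img.crop((0, 0, 20, 20))       # keep only the left portion of the frame
rotated = img.rotate(90, expand=True)    # counter-clockwise; the frame changes
mirrored = ImageOps.mirror(img)          # flip along the vertical axis

# Each operation relocates original content relative to the frame,
# which is why each may need to be disclosed in a research context.
```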
41. 5. Removal of Visual Information, Filler Pixels
• Sometimes, especially in fieldwork and “the wild,” it is hard to control
for photography and digital image capture.
• Sometimes, there are challenges in lab-based image captures as well.
• Sometimes, the visual does not depict what the researcher wants with
sufficient emphasis.
• Question: Is it appropriate to erase information in the visual that is
distracting? (Is it okay to remove “noise”? “Texture”?) And then substitute
something else? If so, why and when? By how much?
• Question: Is it appropriate to use filler pixels to fill in particular parts that one
has cut out? Is it appropriate to use a cloning tool? A patch tool? A spot
healing tool? Is it appropriate to make your own texture and apply it to the
visual? If so, why and when? By how much?
41
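At its core, a clone (or patch) tool copies pixels from a “clean” source region over the region being removed; real tools additionally feather and blend the seam. A minimal NumPy sketch with hypothetical coordinates:

```python
import numpy as np

def clone_patch(img, src_xy, dst_xy, size):
    """Copy a size x size patch from src to dst -- the core of a clone tool.
    Real tools also feather/blend edges; this is the unblended operation."""
    out = img.copy()
    sx, sy = src_xy
    dx, dy = dst_xy
    out[dy:dy + size, dx:dx + size] = img[sy:sy + size, sx:sx + size]
    return out

# Hypothetical image: uniform background with a dark "distraction."
img = np.full((32, 32), 200, np.uint8)
img[10:14, 10:14] = 0  # the distracting element

cleaned = clone_patch(img, src_xy=(20, 20), dst_xy=(10, 10), size=4)
```

Note that the result contains two byte-identical regions, which is precisely the statistical signature that copy-move forgery detectors (such as the one cited on slide 57) look for.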
42. 6. Moving an Object / Resizing an Object
• Sometimes, especially in fieldwork and “the wild,” it is hard to control
for photography and digital image capture.
• Sometimes, there are challenges in lab-based image captures as well.
• Sometimes, objects may occlude particular points of interest.
• Perhaps artifacts may have been accidentally introduced in a photo or
a scan. Perhaps there may be other kinds of visual “noise.”
• Sometimes an object looks subjectively wrong size-wise.
• Question: Is it appropriate to select an object in an image (Select Subject?
Lasso tool? Marquee tools?) and cut it out of the picture, replacing it with an
alpha channel or an empty background? Move its location? Change its size? If so,
why and when? By how much?
42
43. 7. Diagnostic Color Filters (for Analysis)
• Digital means have been used for various analyses of the images.
• Some of these means may leave residuals on the current imagery.
• Question: Do you have to return to an original image and use that, or can you
use the digital image that may have residual layers or tinting or other effects?
If so, why and when? By how much?
43
44. 8. Masking (Selective Hiding; Selective
Revealing)
• Other digital image editing enables driving human visual focal
attention to parts of an image (via masking or hiding parts of an
image, via blurring, via “feathering” as a type of blur)…
• Masking involves the use of layers to apply different filter, lighting, color,
and other effects.
• Blurring serves to “hide” details of information and drive attention elsewhere.
• Question: Is masking appropriate?
44
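Masking-plus-blur to drive attention can be sketched in Pillow with a grayscale mask: white areas keep the sharp original, black areas receive the blurred version. The striped test image and elliptical mask below are hypothetical:

```python
from PIL import Image, ImageDraw, ImageFilter

# Hypothetical image with fine detail (vertical stripes) everywhere.
img = Image.new("RGB", (64, 64), "white")
draw = ImageDraw.Draw(img)
for x in range(0, 64, 4):
    draw.line([(x, 0), (x, 63)], fill="black")

# Mask: white = keep sharp, black = blur away (attention driven to the center).
mask = Image.new("L", (64, 64), 0)
ImageDraw.Draw(mask).ellipse([16, 16, 48, 48], fill=255)

blurred = img.filter(ImageFilter.GaussianBlur(radius=4))
focused = Image.composite(img, blurred, mask)  # sharp inside the ellipse only
```

“Feathering” corresponds to blurring the mask itself, so the transition between the sharp and hidden regions is gradual.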
45. 9. Compositing / Combining / Fusing
• A combination of capabilities using layers enables emplacement of
pieces and parts of digital snippets to make wholly new (appearing)
visuals.
• Compositing generally refers to combining pieces and parts of multiple
images to create a semi-coherent / coherent new whole.
• Question: Is compositing / combining / fusing legit? If so, why and when? In
what contexts? (Maybe in the depiction of fictional or imagined scenarios?
In particular computational image analysis sequences?)
45
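Compositing via layers reduces, computationally, to alpha-blending one image onto another at an offset. A Pillow sketch with two hypothetical solid-color “layers”:

```python
from PIL import Image

# Two hypothetical source images (layers).
background = Image.new("RGBA", (64, 64), (10, 80, 10, 255))   # green field
snippet = Image.new("RGBA", (16, 16), (200, 30, 30, 255))     # red object

# Compositing: emplace the snippet onto the background as a new layer.
composite = background.copy()
composite.alpha_composite(snippet, dest=(24, 24))
```

With partially transparent snippets and multiple layers, the same operation produces the “semi-coherent / coherent new whole” described above.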
46. 10. Explanatory Depth re: Complex Imagery
• Some images and visuals are highly complex.
• The details in such visuals are always finite and limited.
• Sometimes, it may take several visuals to explain a concept.
• Making a visual explanatory, even if it is separated from the original
slideshow or paper, requires more work.
• Question: Should the researcher or research team make the effort to make
the visual clear in the slideshow / paper? If so, why and how? What sorts of
alt text should be included with each visual?
46
47. 11. Selective Explanatory Language
• A set of research visuals seem to contravene the researcher’s hypothesis.
• These findings may sink the hypothesis that the researcher posited years ago and
spent years trying to explore (and maybe, in his/her heart of hearts, to support).
• The presentation of research is always somewhat selective. After all, not
everything can be shared. Some aspects of the research are more relevant
than others.
• Question: Is it fair to omit contravening data? Is it fair to withhold information
and not share it when publishing? If so, why and when? By how much?
• Question: Or should the contravening data just be included in the footnotes? If so,
why? How?
47
48. 12. Batch Processing with Macros
• Automatic image processing is highly helpful in many circumstances where
there is a large number of visuals to process simultaneously…and for which
a known sequence of image handling has been designed (and expressed as
macros or as small programs).
• Automation is important for consistency, for controlling against human
error, and for efficiencies, among other benefits.
• However, sometimes, not all images in a set are of the same type, meet
the particular criteria, or share the basic parameters the macro assumes.
• Question: Is it fair to run the whole set in a batch even if some of the images that do
not fit the criteria are included? If so, why and when? What are some other ways to
engage this issue?
48
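One way to engage the batch-criteria question above is to validate each image before the macro runs, flagging non-conforming images for manual review rather than silently processing them. A Pillow sketch (the criteria, filenames, and operation are hypothetical):

```python
from PIL import Image

def meets_criteria(img, min_size=(32, 32), mode="RGB"):
    """Basic parameter check before batch processing."""
    return (img.mode == mode
            and img.size[0] >= min_size[0]
            and img.size[1] >= min_size[1])

def batch_process(images, operation):
    """Apply `operation` only to conforming images; flag the rest
    for manual review instead of silently processing them."""
    processed, skipped = [], []
    for name, img in images:
        if meets_criteria(img):
            processed.append((name, operation(img)))
        else:
            skipped.append(name)
    return processed, skipped

# Hypothetical batch: one conforming image, one that fails the size check.
batch = [
    ("ok.png", Image.new("RGB", (64, 64))),
    ("too_small.png", Image.new("RGB", (8, 8))),
]
done, flagged = batch_process(batch, lambda im: im.convert("L"))
```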
49. 13. Aesthetics and / or Branding
• The researcher or research team may have preferences for particular
1. image aesthetics (look and feel) or
2. branding (messaging about the organization or team of work project).
• Perhaps the funding agency wants particular presentational
aesthetics and / or branding.
• Question: Is it fair to change up the visuals in a research work to align with
particular aesthetics? If so, why and when? What are some other ways to
engage this issue?
• Question: With particular branding messages? If so, why and when? What
are some other ways to engage this issue?
49
50. 14. Machine Learning and AI Features
• Adobe Photoshop 2021 enables machine learning and AI features,
many of which are very smooth.
• It is possible to smooth skin.
• It is possible to change up facial expressions of people in a photo.
• It is possible to apply art styles fairly seamlessly from a preset work to
another.
• Question: When is it appropriate to use AI neural filtering and other features
to make people in an image look better to others? Is it appropriate to use the
neural filtering to change up the facial expressions of a professional adversary
to make them look worse? If so, why and when? What are some other ways to engage this issue?
50
53. Authentication Methods
• Various disciplines have their own methods for authenticating
imagery from metadata (geotags, contextual information captured by
camera), from image forensics, from image comparisons, and other
singular and mixed approaches.
• Failing authentication is one way to be found out.
• It is one way to rouse the ferrets.
53
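Metadata-based authentication typically begins with reading the EXIF block (camera make, capture time, geotags). This Pillow sketch writes hypothetical EXIF tags into an in-memory JPEG and reads them back, standing in for reading a camera's original file:

```python
import io
from PIL import Image
from PIL.ExifTags import TAGS

# Hypothetical capture: embed camera metadata into an in-memory JPEG.
img = Image.new("RGB", (8, 8), "gray")
exif = Image.Exif()
exif[271] = "ExampleCam"            # EXIF tag 271 = "Make" (hypothetical value)
exif[306] = "2021:03:01 10:00:00"   # EXIF tag 306 = "DateTime"

buf = io.BytesIO()
img.save(buf, format="JPEG", exif=exif.tobytes())
buf.seek(0)

# An authentication check reads the metadata back and inspects it.
reread = Image.open(buf).getexif()
metadata = {TAGS.get(tag_id, tag_id): value for tag_id, value in reread.items()}
```

Because EXIF is trivially writable (as this sketch shows), metadata alone cannot authenticate an image; it is one signal among the forensic methods discussed in the following slides.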
54. Ways to Get Found Out…by People
• You can tell on yourself…
• You can tell on yourself with what you assert (privately and publicly).
Mistruths, contradictions, and slippage can be revelatory.
• You can tell on yourself with the digital / digitized imagery that you
shared under your name.
• By being in the “chain of custody” and vouching for the provenance of
the information, you are affirming the apparent validity of the
contents.
54
55. Ways to Get Found Out…by People (cont.)
• Your colleagues can tell on you. Colleagues are competitive, and they
are on the lookout for fumbles.
• Research and publishing are gauntlets. People check each other out
and check out each other’s works.
55
56. Ways to Get Found Out…by Image Forensics
• Your imagery can tell on you.
• Imagery is multi-dimensional and complex. It is revelatory in ways of which most
people are not aware.
• There are a number of image forensics tools for automated identification of
edited digital images (especially in 2D and some now in 3D), including the
edits that may indicate fraud.
• There are physics-based methods (Riess, 2017). Light falls a certain way
based on universal rules. Reflectance as a normalized phenomenon can be
used to identify anomalies (Riess, Pfaller, & Angelopoulou, 2015).
• There are programs that can identify the camera used to take an image,
the social network platform the image came from, and the software used to
upload the image (from the image alone) (Giudice, Paratore, Moltisanti, &
Battiato, 2017), thereby reconstructing the history of an image.
56
57. Ways to Get Found Out…by Image Forensics
(cont.)
• Some technologies enable the identification of tampered regions of a digital
image.
• One approach involves using “linear local features” for “copy-move forgery
detection” (Kuznetsov & Myasnikov, 2017, p. 305).
• Various wavelet analysis approaches are also used to identify regions of
interest for anomalies.
• There are programs that detect anomalies in the color gamut, in grayscale,
in gamma ranges, and others.
• Histogram normalization (in distribution) can bring out anomalies in brightness /
darkness.
• There are ways to separate out color channels (RGB and others) that enable
identification of anomalous regions.
57
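A naive version of copy-move forgery detection hashes every fixed-size tile and reports coordinate pairs with identical content; published detectors (such as the linear-local-features approach cited above) use features robust to rescaling and blending, which exact matching cannot catch. A NumPy sketch on a hypothetical tampered image:

```python
import numpy as np

def find_duplicate_blocks(img, block=8):
    """Naive copy-move detection: hash each block x block tile and report
    coordinate pairs with byte-identical content."""
    seen, matches = {}, []
    h, w = img.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches

# Hypothetical tampered image: random texture with one cloned region.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (32, 32), dtype=np.uint8)
img[16:24, 16:24] = img[0:8, 0:8]  # simulate an (unblended) clone edit

matches = find_duplicate_blocks(img)
```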
58. Ways to Get Found Out…by Image Forensics
(cont.)
• There are validation approaches:
• Another approach is an “image hashing” one with “compressive sensing” to
validate (Sun & Zeng, 2014).
• Watermarking (an older embedded technology) is another common
approach.
• Digital signatures are another active method for forgery detection.
• Blockchain technologies are being used as well to establish “original”
works as one-of-a-kind. By elimination, all other unvalidated
versions are then off-true (and fakes).
58
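Hash-based validation can be illustrated with a simple perceptual “average hash” (downscale, threshold at the mean, compare bit strings); this is a stand-in for, not an implementation of, the compressive-sensing hashing cited above. The images below are synthetic:

```python
import numpy as np
from PIL import Image

def average_hash(img, size=8):
    """Downscale, threshold at the mean, return a flat array of hash bits."""
    small = np.asarray(img.convert("L").resize((size, size)), dtype=np.float64)
    return (small > small.mean()).flatten()

def hamming(a, b):
    """Number of differing hash bits; 0 means perceptually identical."""
    return int(np.count_nonzero(a != b))

# Hypothetical original (bright upper half) and a heavily tampered copy
# (bright lower half).
original = Image.new("L", (64, 64))
original.putdata([220 if y < 32 else 60 for y in range(64) for x in range(64)])
tampered = Image.new("L", (64, 64))
tampered.putdata([60 if y < 32 else 220 for y in range(64) for x in range(64)])

distance = hamming(average_hash(original), average_hash(tampered))
```

A small Hamming distance tolerates benign recompression, while content edits push the distance up; that tolerance is what distinguishes perceptual hashing from cryptographic hashing or watermarking.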
59. Ways to Get Found Out…by Image Forensics
(cont.)
• The visuals on the Web and Internet are mapped, and it is possible to
reverse-image-search them (to the tune of tens of billions of images).
• Such setups can be done in all fields for published research. This provides a
sense of institutional memory, against which new works may be compared
computationally and non-computationally.
59
60. Ways to Get Found Out…by Image Forensics
(cont.)
• A number of digital image manipulation detection web services are
coming online, according to recent academic research articles.
• This means that publishers and others may put into place fairly efficient
vetting.
• This also means that people do not particularly need to have a special interest
in you to find you out. The computational costs of acquiring that information
will be minimal.
• Retouches, doctoring, and other digital image manipulations can be
eminently seeable and empirically established.
60
61. Older Image Forensics to Check Images
• The Office of Research Integrity has some Image Forensics tools for
researcher use:
• Forensic Droplets
• Advanced Forensic Action set
• The above are apparently out-of-date and use a much older version of
Adobe Photoshop.
• Other image forensics tools have long replaced these.
• These are still available by email to the organization, according to the
current website.
61
62. Ways to Get Found Out in 3D
• There have been advances in the 3D space to test “if a 3D point cloud
generated from a LiDAR scan has been subsequently manipulated”
(for potential usage in law enforcement) (Ponto, Smith, & Tredinnick,
2019, p. 101).
• The methods include identifying “discontinuities on octant boundaries”
(Ponto, Smith, & Tredinnick, 2019, p. 104), density gradient analysis
(complicated by the combination of multiple scans into a composite of a
scene) (p. 103), and “spherical sampling” (p. 105).
62
64. So…What is Arrayed Against the Artificial?
• Such artificial imagery may be caught in any phase: research,
presentation, peer review, publication, post-publication.
• Where there’s one misappropriation on the surface, people will look for a lot
more underneath.
• There is a “market” for calling out others, in part, to keep the research stream
more accurate and to protect the discipline.
64
65. So…What is Arrayed Against the Artificial? (cont.)
• Policy regimes have been set up against research fraud with severe
penalties, as a deterrence against such actions.
• The social norms around this practice can be unforgiving.
• For some, engaging in fraud is crossing a Rubicon.
• Research integrity is built into various curricula.
65
66. So…What is Arrayed Against the Artificial? (cont.)
• There are image recognition technologies powered by artificial
intelligence (for exact searches, for similarity searches, for discovery,
for image curation, and others).
• There is computational memory of what has already been shared in
the “research stream” (published research), in the WWW and
Internet, in repositories and referatories, and elsewhere.
• There are known patterns of image fraud that have been identified
and mapped. There are “tells” or “indicators” that show
manipulations.
66
67. Range of Negative Outcomes Possible from Data
Falsification or Manipulation or Fabrication
Macro Level
• Public confidence in publicly-funded research can be at stake.
• Grant funding can be at stake.
Meso Level
• A discipline may be harmed.
• University reputation may be harmed.
• Grant funding agency reputation may be harmed, etc.
Micro Level
• Published papers may be retracted, and various digital libraries and
archives keep public records of such retractions.
• Discovery of the data falsification may be career ending.
• Professional reputations may be ruined.
67
68. Some New Thinking and Precautions
• Get out of the mindset of absolute refinement of images and some
“perfection,” because that can lead to image edits that undermine
image integrity (and turn the image artificial). [Don’t use selfie
standards for your research imagery.]
• Support your discipline in engaging a norm of the real vs. the prettified faux
real. Create a new social-professional norm.
• If the image capture did not work the first time, do it again the right
way.
• Be aware of all the implications of digital image editing on each
visual…and control for that. Be careful of processes masked in batch
processing and sequences.
68
69. Thinking and Acting Strategically for “Long
Term” Considerations
• This slideshow’s scenario has a “discrete-time” approach.
• Over time, however, a solid pristine imageset as data can be used
potentially for other exploratory research and analysis. This follow-on
analysis may be done with new techniques and new technologies.
• Having a raw pristine set of digital imagery may be informative for other
as-yet unconceptualized analyses.
• Capturing such information in all likelihood involved much investment
of time, equipment, resources, expertise, and other inputs. Not
preserving a pristine master set of visuals would be a losing (or
“dominated”) strategy in game theory.
69
70. Some Pseudo- “Defenses” / “Excuses”
• Intentionality:
• I wasn’t intending to mislead
• I was trying to convey an accurate view in the new medium or modality
• I was trying to clean the image
• I was trying for a more aesthetically pleasing image
• Insufficient training: Nobody told me
• Lack of control on recipient understandings:
• I can’t control how information is received and interpreted
• Others
70
71. Starting to Drift?
• If you believe that there is perfect image data and that some edits will
get you there…
• If you feel like putting a thumb on the scale…
• If you believe that you can do this and slide under the radar…
• If you feel yourself starting to drift towards image manipulation…
• …what should you do?
• …how do you get back to true?
71
73. References
• Anderson, C. (1994, Jan. 21). Easy-to-alter digital images raise fears of
tampering. Science, 263(5145), 317-318.
• Fanelli, D. (2009, May 29). How many scientists fabricate and falsify
research? A systematic review and meta-analysis of survey data.
PLoS ONE 4(5): e5738.
https://doi.org/10.1371/journal.pone.0005738.
• Giudice, O., Paratore, A., Moltisanti, M., & Battiato, S. (2017,
September). A classification engine for image ballistics of social data.
In International Conference on Image Analysis and Processing (pp.
625-636). Springer, Cham.
73
74. References (cont.)
• Kuznetsov, A., & Myasnikov, V. (2017). Using efficient linear local
features in the copy-move forgery detection task. In International
Conference on Analysis of Images, Social Networks and Texts (pp. 305-
313). Springer, Cham.
• Lacetera, N., & Zirulia, L. (2009). The economics of scientific
misconduct. The Journal of Law, Economics, & Organization, 27(3),
568-603.
• Ponto, K., Smith, S., & Tredinnick, R. (2019). Methods for detecting
manipulations in 3D scan data. Digital Investigation, 30, 101-107.
74
75. References (cont.)
• Riess, C. (2017, September). Illumination analysis in physics-based
image forensics: A joint discussion of illumination direction and color.
In International Tyrrhenian Workshop on Digital Communication (pp.
95-108). Springer, Cham.
• Riess, C., Pfaller, S., & Angelopoulou, E. (2015, September).
Reflectance normalization in illumination-based image manipulation
detection. In International Conference on Image Analysis and
Processing (pp. 3-10). Springer, Cham.
• Sun, R., & Zeng, W. (2014). Secure and robust image hashing via
compressive sensing. Multimedia Tools and Applications, 70(3), 1651-
1665.
75
76. Image Fidelity in
Research
• What does “image fidelity” in your
area of research look like, and
why?
• How do you achieve the proper level
of image fidelity for professional
practice?
• Where are the risks of potentially
lapsing into or choosing
artificiality?
• What are the “best practices” to
avoid image manipulation?
76
77. Some Versioning Notes and Caveats
• Versioning Notes: A first draft of this was shared on SlideShare in early March
2021. Since then, I have reviewed some more literature and added more on-
ground complexity. I updated the early version by replacing it on SlideShare. The
one online will not be the final version because I am always updating up until the
moment of presentation at which point I lock in that slideshow. Behind the
scenes, I am still learning about the topic though.
• I understand the benefits of having a single source for an “authoritative” copy and will likely
update using the final copy used in the presentation.
• Caveats: This is a first run at the topic only and does not actually represent
the full capabilities of the digital image editing software, the complexities of
the academic research space in particular disciplines/domains, or the different
types of digital imagery used in research. This does not touch on deeper image
forensics capabilities either, which can be very sophisticated in-field for various
domains.
77
78. Presenter Information and Contact
• Dr. Shalin Hai-Jew
• Instructional Design / Research / Training
• Academic and Student Technology Services
• ITS
• Kansas State University
• shalin@ksu.edu
• 785-532-5262
• All contents including visuals are by the presenter except for the cited
published sources, which are credited.
• CHECK 2021 is the Conference on Higher Education Computing in
Kansas.
78