The talk was given at the InfinIT event "Temadag om Remote Usability Testing" (Theme Day on Remote Usability Testing), held on 17 January 2012. Read more at http://www.infinit.dk/dk/hvad_kan_vi_goere_for_dig/viden/reportager/mere_tid_prioritering_og_penge_til_brugbarhedstests.htm
Making the introductory science lab accessible online, Apr 2012 (gregkp)
Presentation to the 5th Annual Student Success Mid-Atlantic Regional Conference, Apr 2012; a look at the state and use of simulations and other approaches to making science lab content available online.
The talk was given at the InfinIT event "Usability-evaluering i softwareudvikling" (Usability Evaluation in Software Development), held on 16 September 2010. Read more about the event here: http://infinit.dk/dk/hvad_kan_vi_goere_for_dig/viden/reportager/usability-evaluering_paa_forkant_er_rigtig_god_business.htm
Introduction to the basic concepts of Git, including 1) the working directory and the stage (previously called the cache or index), 2) branch manipulation, and 3) working with remote repositories.
Remote sensing involves obtaining information about objects through analysis of sensor data without physical contact. It uses electromagnetic radiation as an information carrier. Key elements include an energy source, sensors to record energy interactions with objects, and transmission/processing of sensor data. Platforms can be ground, airborne, or space-based. Remote sensing provides regional views over broad portions of the electromagnetic spectrum and geo-referenced digital data. Applications include weather forecasting, mapping, monitoring vegetation/soils in agriculture, assessing water resources, and disaster control.
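One of the agricultural applications mentioned above, vegetation monitoring, is commonly done with the Normalized Difference Vegetation Index (NDVI), computed per pixel from the red and near-infrared bands. A minimal sketch; the band reflectance values below are made up for illustration, not real sensor data:

```python
# NDVI = (NIR - Red) / (NIR + Red); values near +1 indicate dense
# vegetation, values near 0 bare soil, and negative values water or clouds.

def ndvi(nir, red):
    """Per-pixel NDVI for two equally sized band rasters (lists of floats)."""
    return [(n - r) / (n + r) if (n + r) != 0 else 0.0
            for n, r in zip(nir, red)]

# Hypothetical reflectance values for four pixels.
nir_band = [0.50, 0.45, 0.30, 0.10]
red_band = [0.08, 0.10, 0.25, 0.09]

print(ndvi(nir_band, red_band))
```

The first two pixels score high (healthy vegetation reflects strongly in near-infrared and absorbs red), while the last two score near zero, consistent with soil or sparse cover.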
Do you know the basics of Git but wonder what all the hype is about? Do you want the ultimate control over your Git history? This tutorial will walk you through the basics of committing changes before diving into the more advanced and "dangerous" Git commands.
Git is an open source, distributed version control system used to track many different projects. You can use it to manage anything from a personal notes directory to a multi-programmer project.
This tutorial provides a short walkthrough of basic Git commands and the Git philosophy of project management. Then we’ll dive into an exploration of the more advanced and “dangerous” Git commands. Watch as we rewrite our repository history, track bugs down to a specific commit, and untangle commits into an LKML-worthy patchset.
1) The document describes experiments on visual search for musical performances and endoscopic videos using global and local features.
2) Two datasets were used - a musical performances dataset with 473 videos and an endoscopic videos dataset containing anonymized recordings from medical procedures.
3) Experiments evaluated different feature fusion methods for global features and the SIMPLE local feature descriptor, and involved both quantitative and qualitative testing on the datasets.
Pain Points for Novice Programmers of Ambient Intelligence Systems: an Explor... (Juan Pablo Sáenz)
Presentation of the paper "Pain Points for Novice Programmers of Ambient Intelligence Systems: an Exploratory Study" at the IEEE 41st Annual Computer Software and Applications Conference (COMPSAC 2017) in Torino, Italy on July 4-8, 2017
The document discusses usability testing and inquiry methods. It provides details on conducting a usability test of the OpenSMSDroid Android app, including testing configurations, representative tasks, and data collection and analysis. It also summarizes findings from a usability study of various websites reported in the book Prioritizing Web Usability.
Nicole Karpinsky defended her thesis, which examined how spatial memory load affects compliance with an imperfect automated decision aid during a visual detection task. The study found that while the aid improved performance, spatial memory load reduced compliance compared to a no-load condition. Compliance was higher for a near-perfect aid than for a more fallible aid. The results provide insights into how workload influences human-automation interaction, though they differ somewhat from prior studies that examined the impact of distracted conditions on compliance. Addressing limitations and exploring attentional demands could help characterize automation use in attention-demanding environments.
This document analyzes differences in task descriptions used in remote think aloud usability tests. Two types of task descriptions are identified: task focused descriptions that provide step-by-step instructions, and scenario focused descriptions that place tasks in a simulated real-world context. A study is described that used different task description styles with participants from different countries, finding cultural differences affected task interpretation. Analyzing participant questions and performances showed task focused descriptions led to a rigid focus on steps while scenario focused descriptions helped understand the system's overall purpose but allowed for flexible interpretations across cultures.
WUD Slovakia 2015: Experiment v UX class / Prof. Ing. Mária Bieliková, PhD. (suxask)
This document summarizes research conducted using 20 eye trackers to simultaneously track the eye movements of multiple participants. Key points:
- 20 Tobii X2-60 eye trackers were used in one room to track eye movements of multiple participants at once.
- Two experiments were conducted with 50 participants each, collecting over 2.5GB of raw eye tracking data.
- Lessons learned included having assistants, conducting pilot studies with multiple participants, and ensuring careful calibration and instructions for group eye tracking sessions.
ICSE’14 Workshop Keynote Address: Emerging Trends in Software Metrics (WeTSOM’14).
Data about software projects is not stored in metric1, metric2, …, but is shared between them in some common, underlying shape. Not every project has the same underlying simple shape; many projects have different, albeit simple, shapes. We can exploit that shape to great effect: for better local predictions, for transferring lessons learned, and for privacy-preserving data mining.
Workflow Reuse in Practice: A Study of Neuroimaging Pipeline Users (dgarijo)
eScience 2014, Guarujá (Brasil). Abstract: Workflow reuse is a major benefit of workflow systems and shared workflow repositories, but there are barely any studies that quantify the degree of reuse of workflows or the practical barriers that may stand in the way of successful reuse. In our own work, we hypothesize that defining workflow fragments improves reuse, since end-to-end workflows may be very specific and only partially reusable by others. This paper reports on a study of the current use of workflows and workflow fragments in labs that use the LONI Pipeline, a popular workflow system used mainly for neuroimaging research that enables users to define and reuse workflow fragments. We present an overview of the benefits of workflows and workflow fragments reported by users in informal discussions. We also report on a survey of researchers in a lab that has the LONI Pipeline installed, asking them about their experiences with reuse of workflow fragments and the actual benefits they perceive. This leads to quantifiable indicators of the reuse of workflows and workflow fragments in practice. Finally, we discuss barriers to further adoption of workflow fragments and workflow reuse that motivate further work.
The Deep Continual Learning community should move beyond studying forgetting in Class-Incremental Learning scenarios! In this tutorial, given at
#CoLLAs2023, Antonio Carta and I try to explain why and how! 👇
Do you agree?
This paper discusses challenges in contextual task analysis and the need for tools that support analysts in collecting such information in context. Specifically, we argue that the analysis of collaborative and distributed tasks can be supported by ambulatory assessment tools. We illustrate how contextual task analysis can be supported by TEMPEST, a platform originally created for experience sampling and, more generally, longitudinal ambulatory assessment studies. We present a case study that illustrates the extent to which this tool meets the needs of real-world task analysis, describing the gains in efficiency it can provide but also directions for the development of tool support for task analysis.
Usability evaluation methods (part 2) and performance metrics (Andres Baravalle)
This document provides an overview of usability evaluation methods and performance metrics. It discusses evaluation approaches such as usability testing, usability inspections, and usability inquiry, and covers specific techniques like heuristic evaluations, cognitive walkthroughs, surveys, and contextual inquiry. The document then discusses different types of performance metrics that can be used to measure the user experience, including task success rates, levels of success, errors, efficiency, and learnability.
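The performance metrics listed here are straightforward to compute once session data is logged. A minimal sketch; the session records, task names, and field names below are hypothetical, and the time-based efficiency formula (success divided by time, averaged over attempts) is one common variant, not necessarily the one used in the document:

```python
# Hypothetical usability-test log: one record per participant per task.
sessions = [
    {"task": "checkout", "success": True,  "time_s": 95.0,  "errors": 1},
    {"task": "checkout", "success": True,  "time_s": 120.0, "errors": 0},
    {"task": "checkout", "success": False, "time_s": 300.0, "errors": 4},
    {"task": "search",   "success": True,  "time_s": 40.0,  "errors": 0},
]

def task_success_rate(records, task):
    """Binary success rate: fraction of participants completing the task."""
    attempts = [r for r in records if r["task"] == task]
    return sum(r["success"] for r in attempts) / len(attempts)

def time_based_efficiency(records, task):
    """Mean of success/time over attempts (goals per second); a failed
    attempt contributes zero, so slow failures drag the metric down."""
    attempts = [r for r in records if r["task"] == task]
    return sum(r["success"] / r["time_s"] for r in attempts) / len(attempts)

print(task_success_rate(sessions, "checkout"))  # 2 of 3 participants succeed
```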
Software Engineering Research: Leading a Double-Agent Life (Lionel Briand)
The document discusses testing of closed-loop controllers in automotive systems. It notes the increasing complexity of automotive software and challenges in model-in-the-loop (MIL), software-in-the-loop (SIL), and hardware-in-the-loop (HIL) testing. It presents an approach to generate test cases for continuous controllers at the MIL level by representing requirements as objective functions and using a search-based approach to find worst-case scenarios. Experimental results found significantly worse scenarios than their industry partner, allowing them to generate better stress tests for HIL. The approach addresses a problem largely ignored in MIL testing of continuous controllers.
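The core idea above, expressing a requirement as an objective function and searching the input space for worst-case violations, can be sketched as follows. The "controller response" here is a toy stand-in function, not the authors' automotive models, and plain random search stands in for their more sophisticated search algorithm:

```python
import random

def overshoot(step_input):
    """Toy stand-in for a MIL simulation: the overshoot observed for a
    given step input. A bumpy function with its worst case in the
    interior of the input range (hypothetical dynamics)."""
    return 4.0 * step_input * (1.0 - step_input) * abs(step_input - 0.3)

def search_worst_case(objective, trials=10000, seed=42):
    """Search-based testing: maximize the objective, i.e. hunt for the
    input that violates the requirement most severely."""
    rng = random.Random(seed)
    best_x, best_val = None, float("-inf")
    for _ in range(trials):
        x = rng.uniform(0.0, 1.0)
        val = objective(x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

worst_input, worst_overshoot = search_worst_case(overshoot)
print(worst_input, worst_overshoot)
```

The worst-case input found this way becomes a stress test to replay at the more expensive HIL level, which is the workflow the summary describes.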
This document summarizes a lecture on data preprocessing for machine learning. It discusses the importance of data preprocessing, including handling missing or noisy data. The major tasks covered are data cleaning, integration, transformation, and reduction. Specific techniques discussed include data normalization, discretization, concept hierarchy generation, and dimensionality reduction. The goal of preprocessing is to transform raw data into a cleaner format suitable for analysis by machine learning algorithms.
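Two of the preprocessing steps named here, normalization and discretization, are easy to illustrate. A minimal sketch of min-max normalization and equal-width binning; the `ages` values are made up:

```python
def min_max_normalize(values, new_min=0.0, new_max=1.0):
    """Rescale values linearly into [new_min, new_max]."""
    lo, hi = min(values), max(values)
    return [new_min + (v - lo) * (new_max - new_min) / (hi - lo)
            for v in values]

def equal_width_bins(values, n_bins):
    """Discretize values into n_bins equal-width intervals, returning a
    bin index (0 .. n_bins-1) per value; the maximum lands in the last bin."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

ages = [18, 22, 25, 40, 63, 70]
print(min_max_normalize(ages))
print(equal_width_bins(ages, 3))
```

Normalization keeps features on comparable scales for distance-based learners, while binning turns a continuous attribute into the kind of categorical ranges a concept hierarchy builds on.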
1. The document proposes a method for unsupervised visual domain adaptation using auxiliary information from the target domain. It uses partial least squares (PLS) to generate target domain subspaces incorporating subsidiary non-visual data like depth features.
2. Experiments on object recognition across the ImageNet and B3DO datasets show the method outperforms previous subspace-based approaches that only use visual information. Using auxiliary target domain data improves classification accuracy consistently.
3. The approach is an improvement as most prior work on unsupervised domain adaptation did not leverage non-visual cues available in the target domain. This opens possibilities to incorporate other multimedia signals from life logging systems.
A personal journey towards more reproducible networking research (Olivier Bonaventure)
The document discusses reproducibility in networking research. It summarizes a study on the accessibility of software artifacts from papers presented at SIGCOMM, CoNEXT, and Hotnets conferences between 2013-2014. The study found that only a small portion had their software artifacts publicly available either through a URL in the paper or by contacting the authors. It provides recommendations to improve reproducibility, such as encouraging authors to release source code and data with their papers. The document also discusses challenges around handling and sharing privacy-sensitive network data.
Investigating teachers' understanding of IMS Learning Design: Yes they can! (Michael Derntl)
1. The document summarizes a study that investigated teachers' understanding of the IMS Learning Design specification after a brief introduction. 21 teachers from 10 European countries participated in workshops where they were introduced to IMS LD elements and used an authoring tool.
2. In the study, teachers were given a scenario and task to model using IMS LD. Their solutions were analyzed and scored against a prototype. On average, solutions had 78% conformity with the prototype.
3. The study found teachers were able to complete moderately complex design tasks with IMS LD after a short introduction. However, some elements like role-parts and conditional activities posed conceptual difficulties for some teachers.
Vision Based Analysis on Trajectories of Notes Representing Ideas Toward Work... (Yuji Oyamada)
Paper info: https://www.jstage.jst.go.jp/article/pjsai/JSAI2019/0/JSAI2019_2C5E504/_article/-char/ja/
Paper (pdf): https://www.jstage.jst.go.jp/article/pjsai/JSAI2019/0/JSAI2019_2C5E504/_pdf/-char/ja
DeepXplore: Automated Whitebox Testing of Deep Learning Systems (Jeongwhan Choi)
K. Pei, Y. Cao, J. Yang, and S. Jana, “DeepXplore: Automated Whitebox Testing of Deep Learning Systems,” SOSP 2017 - Proc. 26th ACM Symp. Oper. Syst. Princ., pp. 1–18, May 2017.
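DeepXplore's central metric, neuron coverage, is the fraction of neurons whose activation exceeds a threshold for at least one test input. A minimal sketch on a tiny hand-wired fully connected network; the weights, inputs, and 0.5 threshold are arbitrary illustration values, not from the paper:

```python
def relu(x):
    return max(0.0, x)

def forward(x, weights):
    """Return every hidden/output activation of a tiny dense network.
    weights: list of layer matrices (each a list of per-neuron weight rows)."""
    activations = []
    layer_in = x
    for matrix in weights:
        layer_out = [relu(sum(w * v for w, v in zip(row, layer_in)))
                     for row in matrix]
        activations.extend(layer_out)
        layer_in = layer_out
    return activations

def neuron_coverage(inputs, weights, threshold=0.5):
    """Fraction of neurons activated above threshold by ANY test input."""
    n_neurons = sum(len(matrix) for matrix in weights)
    covered = set()
    for x in inputs:
        for i, a in enumerate(forward(x, weights)):
            if a > threshold:
                covered.add(i)
    return len(covered) / n_neurons

# Two hidden neurons plus one output neuron, identity-like wiring.
w = [[[1.0, 0.0], [0.0, 1.0]], [[1.0, 1.0]]]
print(neuron_coverage([[1.0, 0.0]], w))             # one input: 2 of 3 neurons
print(neuron_coverage([[1.0, 0.0], [0.0, 1.0]], w)) # second input covers the rest
```

DeepXplore then searches for inputs that raise this coverage while causing disagreement between multiple models, which is how it finds corner-case behaviors.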
Elsevier's RDM Program: Habits of Effective Data and the Bourne Ultimatum (Anita de Waard)
Elsevier's RDM Program: Ten Habits of Highly Effective Data
The document outlines Elsevier's research data management (RDM) program and efforts to support the effective management of research data. It discusses a "Maslow hierarchy" with 10 aspects of highly effective research data from stored to integrated. It provides examples of Elsevier's RDM tools and services like Hivebench, Mendeley Data, and DataSearch that help support storing, sharing, citing, and discovering research data. It also discusses collaborative RDM efforts like Force11, Research Data Alliance, and Crossref as well as journal initiatives to improve reproducibility. The document concludes with a proposed partnership where an institution could pilot and provide feedback on Elsevier's
This document discusses the history and evolution of usability testing over the decades from the 1970s to present. In the 1970s, usability testing was in its infancy with designers believing users were like them and wouldn't need to be consulted. The first usability labs were built in this decade. In the 1980s, usability testing became more scientific and expensive, conducted primarily in labs. Formative testing methods were introduced. The 1990s saw a shift to cheaper discount usability methods with fewer users needed. New techniques like personas and first click testing emerged. Today, faster and remote testing methods allow for more agile development while capturing the essential user experience.
This document discusses challenges with hardware-near programming and proposes solutions like object-oriented design, test-driven development, and mocking hardware for testing in C. It provides examples of encapsulating hardware registers in C and writing tests that check register values and function outputs without the physical hardware. The document concludes that while setting up the tools is an initial investment, TDD is possible and helps create safe, maintainable low-level software.
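The register-mocking pattern described here (the document's own examples are in C) can be sketched in Python for brevity: the driver logic writes through a register abstraction, and the test substitutes a fake register object, so assertions on register values need no physical hardware. All names and the bit layout below are hypothetical:

```python
class FakeRegister:
    """Test double for an 8-bit memory-mapped hardware control register."""
    def __init__(self):
        self.value = 0

    def write(self, value):
        self.value = value & 0xFF  # mimic 8-bit register width

    def read(self):
        return self.value

LED_ON_BIT = 0x01  # hypothetical bit assignment in the control register

def led_on(ctrl_reg):
    """Driver logic under test: set the LED bit, preserve the other bits."""
    ctrl_reg.write(ctrl_reg.read() | LED_ON_BIT)

def led_off(ctrl_reg):
    ctrl_reg.write(ctrl_reg.read() & ~LED_ON_BIT)

# TDD-style check without any physical hardware:
reg = FakeRegister()
reg.write(0x80)            # pretend some other bit is already set
led_on(reg)
assert reg.read() == 0x81  # LED bit set, other bit untouched
led_off(reg)
assert reg.read() == 0x80
```

This is the same read-modify-write discipline the C version would exercise; only the register's memory address is replaced by an object the test controls.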
Similar to Erfaringer med Remote Usability Testing af Jan Stage, AAU
This document summarizes an embedded software project that used object-oriented modeling and design with UML, along with Safety-Critical Java and C programming. A team of students created a model car that could be remotely controlled via an app. The project followed an object-oriented development process, including use case modeling, component diagrams, and testing of components using mock objects. The design included a layered architecture with hardware abstraction and platform abstraction layers. Missions in Safety-Critical Java were used to model different car modes like Park and Drive. Unit testing of components and testing on the execution platform helped evaluate memory usage and schedulability. The document concludes that this approach helped manage complexity in the embedded system.
The document summarizes a company's conversion of its embedded controller software from C to C++ over a two month period. It involved converting 8 projects with 30% shared code across 18 developers. Challenges included converting callbacks and dealing with scripting errors. Opportunities included improving code quality, team building, and evaluating new static analysis tools. The conversion was successful with minimal performance impacts and many bugs were found and fixed during the process. Future plans include C++ training and refactoring code to fully utilize C++ features.
This document discusses embedded Linux development from a manager's perspective. It provides the speaker's background working with C and C++ on embedded systems. Key expectations of programming languages for embedded systems are outlined, including flexibility, low cost, and real-time performance. The document discusses why C is commonly used for embedded development and outlines best practices like code reviews when using C to avoid issues. It also discusses moving to C++ and using Linux for embedded projects.
The document discusses the C programming language. It provides some key facts about C:
- C was developed in the early 1970s by Dennis Ritchie at Bell Labs.
- C became popular due to its use in developing the UNIX operating system.
- The IT world widely uses C, as evidenced by its use in operating systems like Linux, Windows, and iOS.
- The C language has undergone standardization with standards published in 1989 (C89), 1999 (C99), 2011 (C11), and 2018 (C18).
- C influenced many other popular programming languages and remains one of the most widely used languages today.
The document discusses the evolution of industrial revolutions and key elements of Industry 4.0, including intelligent automation and production facilities, smart products, virtual production, and more. It also examines the increasing need for systems engineering as products and production become more complex. Finally, it outlines six key fields that must be mastered for successful digital transformation: usage, data, technology, process, role, and culture.
Emergent synthetic processes (ESP) is a new paradigm for implementing process changes without needing agreement from all participants. It works by having organizational members define service descriptions stating what tasks they are willing to do and under what conditions. Processes are then synthesized in real-time from these service descriptions for each specific case, finding the optimal route through the organization. This allows service descriptions and partially completed processes to be updated at any time without requiring agreement. ESP enables a more flexible and distributed approach to processes and workflow.
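As a rough illustration of the synthesis idea, the route-finding step can be sketched as a search over service descriptions: each member states what they can do and under which preconditions, and a route is computed per case. Everything below (the service names, the "requires"/"produces" fields, the goal) is invented for illustration; the summary describes ESP only conceptually.

```python
from collections import deque

# Hypothetical service descriptions: each organizational member states which
# task they are willing to do (requires) and what it yields (produces).
# All names are invented for illustration.
SERVICES = {
    "registration": {"requires": set(), "produces": {"case-registered"}},
    "assessment": {"requires": {"case-registered"}, "produces": {"assessed"}},
    "approval": {"requires": {"assessed"}, "produces": {"approved"}},
    "fast-track": {"requires": {"case-registered"}, "produces": {"approved"}},
}

def synthesize(goal):
    """Breadth-first search over service descriptions: returns the shortest
    sequence of services whose combined outputs reach the goal."""
    start = frozenset()
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, route = queue.popleft()
        if goal <= state:
            return route
        for name, desc in SERVICES.items():
            if desc["requires"] <= state:
                new_state = frozenset(state | desc["produces"])
                if new_state not in seen:
                    seen.add(new_state)
                    queue.append((new_state, route + [name]))
    return None

print(synthesize({"approved"}))  # ['registration', 'fast-track']
```

Because routes are recomputed per case, a member can change their service description at any time; the next case is simply synthesized against the updated descriptions, with no global agreement step.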
This document discusses the integration of DCR (Dynamic Condition Response) graphs with the KMD Workzone case management platform to enable more automated and adaptive case resolution. It envisions using technologies like machine learning, artificial intelligence, and automation to handle more routine case activities while still allowing for human judgment and deviations from standard workflows. The approach is described as evolutionary rather than revolutionary, breaking large changes into smaller, configurable steps and getting users involved to identify automatable activities and ensure the system meets their needs. Demonstrations are provided of Workzone's flexible configuration capabilities and how DCR could be integrated to iteratively introduce more automated case resolution over time.
SupWiz is a spin-off founded by world-leading AI experts that develops omni-channel AI software to disrupt customer service and support. Their platform makes different customer service channels intelligent and links them together using techniques like intelligent virtual agents, knowledge management, and analytics. The platform integrates with infrastructure components and has been proven valuable at several customers, accurately answering questions and reducing response times. SupWiz aims to improve the customer experience throughout the entire journey with AI-powered solutions.
The document discusses NNIT's vision for its Service Support Center to improve user productivity through reducing demand for support. Key points include:
- Integrating all user interaction data across systems to create a single source of truth data warehouse for metrics and reporting.
- Implementing configuration management policies, SLA policies, and integrating different levels of knowledge and problem management to reduce support demand and minimize downtime.
- The goal is machine-learning-enabled intelligent automation that is flexible, consistent, and cost-efficient, providing support across channels such as phone and chat, with multi-language translation available 24/7 globally.
- Statistics are presented on ticket routing optimization using AI to reduce unnecessary ticket jumps between support agents.
This document discusses how natural language processing (NLP) can be used for customer support. It outlines several NLP applications for customer support like search, fraud detection, and translation. It also discusses how NLP can help answer previously unasked questions by generating questions from knowledge bases and documents. Finally, it proposes a "customer support Turing test" to evaluate NLP systems for their ability to fool classifiers that distinguish customer support agents from customers.
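The proposed "customer support Turing test" can be sketched as measuring how often a discriminator mistakes machine-generated replies for those of a human agent. The toy classifier and example replies below are invented for illustration; a real evaluation would use a trained agent-vs-customer classifier.

```python
# A minimal sketch of the "customer support Turing test" scoring idea:
# an NLP system passes to the extent a discriminator labels its replies
# as coming from a human support agent. The keyword classifier below is
# a deliberately naive, invented stand-in for a trained model.
def naive_agent_classifier(text):
    """Toy discriminator: flags polite service phrases as 'agent'."""
    agent_markers = ("thank you", "how can i help", "let me check")
    return any(m in text.lower() for m in agent_markers)

def fool_rate(generated_replies, classifier):
    """Fraction of machine replies the discriminator labels as human agent."""
    hits = sum(1 for r in generated_replies if classifier(r))
    return hits / len(generated_replies)

replies = [
    "Thank you for reaching out, let me check your order.",
    "idk maybe restart it",
]
print(fool_rate(replies, naive_agent_classifier))  # 0.5
```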
This document provides information about an AI conference on the future of customer service. The conference will feature presentations from leaders in various AI and data organizations, as well as a panel debate. Statistics are presented showing the growing importance and impact of AI and chatbots on customer service interactions and cost savings over the coming years. The AMAOS project from the University of Copenhagen is also introduced, which focuses on advanced machine learning for automated omni-channel customer support.
The document discusses a project aimed at improving quality of life for citizens with affective disorders like depression. It outlines a vision called "Psyche" that aims to anticipate and alleviate acute depression through a digital platform. A configuration table presents the rationale, strategy, and tactics for a prospect to realize this vision, including leveraging the user's digital diary and questionnaire responses to detect emerging depressive episodes and provide alleviation measures. The table identifies challenges like ineffective intervention and underused platform potential, noting that anticipation works but could be improved and alleviation measures are sometimes weak or misplaced.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into integrating generative AI and OpenAI's advanced natural language processing capabilities into a test automation solution.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI?
Test automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Skybuffer SAM4U tool for SAP license adoption (Tatiana Kojar)
Manage and optimize your license adoption and consumption with SAM4U, a free SAP software asset management tool for customers.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Ocean Lotus threat actors project by John Sitima, 2024 (SitimaJohn)
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart (Chart Kalyan)
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to immerse yourself in a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: Advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations, and training activities related to LibreOffice. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko she cultivates her curiosity about astronomy (hence her nickname deneb_alpha).
Building Production-Ready Search Pipelines with Spark and Milvus (Zilliz)
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data into vector representations, and how to push the vectors to the Milvus vector database for search serving.
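To make the serving side concrete, here is a minimal sketch of what vector search does: brute-force cosine similarity over a tiny in-memory collection. Milvus implements the same idea at production scale with real indexes (e.g. HNSW or IVF); the Spark and Milvus APIs themselves are not shown, and the ids and vectors below are invented for illustration.

```python
import math

# Toy stand-in for the search-serving step: brute-force cosine similarity
# over a small in-memory "collection". A vector database does the same job
# with approximate-nearest-neighbor indexes at production scale.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

collection = {
    "doc-1": (1.0, 0.0),
    "doc-2": (0.8, 0.6),
    "doc-3": (0.0, 1.0),
}

def search(query, top_k=2):
    """Return the ids of the top_k vectors most similar to the query."""
    ranked = sorted(collection, key=lambda i: cosine(query, collection[i]),
                    reverse=True)
    return ranked[:top_k]

print(search((1.0, 0.1)))  # ['doc-1', 'doc-2']
```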
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
Taking AI to the Next Level in Manufacturing (ssuserfac0301)
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Digital Marketing Trends in 2024 | Guide for Staying Ahead (Wask)
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
How to Get CNIC Information System with Paksim Ga (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Introduction of Cybersecurity with OSS at Code Europe 2024 (Hiroshi SHIBATA)
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Webinar: Designing a schema for a Data Warehouse (Federico Razzoli)
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, including databases of any type that back the applications used by the company, data files exported by some applications, and APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires first gathering information about the business processes that need to be analysed. These processes must then be translated into so-called star schemas: denormalised schemas in which each table represents either a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
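As a minimal sketch of the star-schema idea described above, the following uses SQLite with one fact table and one denormalised dimension. All table and column names are invented for illustration and are not taken from the webinar.

```python
import sqlite3

# A minimal star-schema sketch: one fact table (fact_sales) pointing at one
# denormalised dimension (dim_customer). Names are invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY,
        name TEXT, city TEXT, country TEXT   -- denormalised attributes
    );
    CREATE TABLE fact_sales (
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        sale_date TEXT,                      -- the grain: one row per sale
        amount REAL                          -- the measurement
    );
""")
con.execute("INSERT INTO dim_customer VALUES (1, 'Acme', 'Milan', 'IT')")
con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, "2024-01-02", 100.0), (1, "2024-01-03", 50.0)])

# Typical DWH query: aggregate the facts, slice by a dimension attribute.
row = con.execute("""
    SELECT c.country, SUM(f.amount)
    FROM fact_sales f JOIN dim_customer c USING (customer_key)
    GROUP BY c.country
""").fetchone()
print(row)  # ('IT', 150.0)
```

The grain here is one row per sale; all analytical queries aggregate the fact table and slice it by attributes stored in the dimension.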
Experiences with Remote Usability Testing, by Jan Stage, AAU
1. Experiences with Remote Usability Testing?
Jan Stage
Professor, PhD
Research leader in Information Systems (IS)/Human-Computer Interaction (HCI)
Aalborg University, Department of Computer Science, HCI-Lab
jans@cs.aau.dk
3. Overview
• Study 1: synchronous or asynchronous
• Method
• Results
• Conclusion
• Study 2
Institut for Datalogi 3
4. Empirical Study 1
Four methods: LAB – RS – AE – AU (lab test, remote synchronous, asynchronous expert, asynchronous user)
Test subjects: 6 in each condition (18 users and 6 with usability expertise), all students at Aalborg University
System: Email client (Mozilla Thunderbird 1.5)
9 defined tasks (typical email functions)
Setting, procedure and data collection in accordance with the method
Data analysis: 24 outputs were analysed by three persons in random and different order
Each analyst generated their individual list of usability problems with their own categorizations (also for the AE and AU conditions)
These were merged into an overall problem list through negotiation
5. Results: Task Completion
No significant difference in task completion
Significant difference in task completion time
The users in the two asynchronous conditions spent considerably more time
We do not know the reason
6. Results: Usability Problems Identified
A total of 46 usability problems
No significant difference between LAB and RS
AE/AU identified significantly fewer problems, including critical problems
No significant difference between AE and AU in terms of problems identified
7. Conclusion
RS is the most widely described and used remote method. Its performance is virtually equivalent to LAB (or slightly better)
AE and AU perform surprisingly well
Experts do not perform significantly better than users
Video analysis (LAB and RS) required considerably more evaluator effort than the user-based reporting (AU and AE)
Users can actually contribute to usability evaluation – not with the same quality, but reasonably well, and there are plenty of them
8. Overview
• Study 1
• Study 2: which asynchronous method
• Method
• Results
• Conclusion
9. Empirical Study 2
Purpose: examine and compare remote asynchronous methods
Focus on usability problems identified
Comparable with the previous study
Selection of asynchronous methods based on literature survey
10. The 3 Remote Asynchronous Methods
User-reported critical incident (UCI)
• Well-defined method (Castillo et al. CHI 1998)
Forum-based online reporting and discussion (Forum)
• Assumption: through collaboration participants may give input which increases data quality and richness (Thompson, 1999)
• A source for collecting qualitative data in a study of auto logging (Millen, 1999): the participants turned out to report detailed usability feedback
Diary-based longitudinal user reporting (Diary)
• Used on a longitudinal basis for participants in a study of auto logging to provide qualitative information (Steves et al. CSCW 2001)
• First day: same tasks as the other conditions (first part of diary delivered)
• Four more days: new tasks (same type) sent daily (complete diary delivered)
Conventional user-based laboratory test (Lab)
• Included as benchmark
11. Empirical Study (1)
Participants:
• 40 test subjects, 10 for each condition
• Students, age 20 to 30
• Distributed evenly: gender and tech/non-tech education
Setting:
• LAB: in our usability lab
• Remote asynchronous: in the participants' homes
Participants in the remote asynchronous conditions received the software and installed it on their computer
Training material for the remote asynchronous conditions
• Identification and categorisation of usability problems
• A minimalist approach that was strictly remote and asynchronous (via email)
12. Empirical Study (2)
Tasks:
• Nine fixed tasks
• The same across the four conditions to ensure that all participants used the same parts of the system
• Typical email tasks (same as previous study)
Data collection in accordance with the method
• LAB: video recordings
• UCI: web-based system for generating problem descriptions while solving tasks
• Forum: after solving tasks, one week for posting and discussing problems
• Diary: a diary with no imposed structure; first part after the first day
13. Data Analysis
All data collected before the data analysis started
3 evaluators did the whole data analysis
The 40 data sets were analysed by the 3 evaluators
• In random order: by a draw
• In different order between them
The user input from the three remote conditions was transformed into usability problem descriptions
Each evaluator generated his/her own individual list of usability problems with their own severity ratings
• A problem list for each condition
• A complete problem list (joined)
These were merged into an overall problem list through negotiation
14. Results: Task Completion Time
Considerable variation in task completion times
Participants in the remote conditions worked in their home at a time they selected
For each task there was a hint that allowed them to check if they had solved the task correctly
As we have no data on the task solving process in the remote conditions, we cannot explain this variation
15. Results: Usability Problems Identified

                                     Lab           UCI            Forum         Diary
                                     (N=10)        (N=10)         (N=10)        (N=10)
Task completion time, Tasks 1-9,
minutes: average (SD)                24.24 (6.3)   34.45 (14.33)  15.45 (5.83)  32.57 (28.34)
Usability problems:                  #     %       #     %        #     %       #     %
Critical (21)                        20    95      10    48       9     43      11    52
Serious (17)                         14    82      2     12       1     6       6     35
Cosmetic (24)                        12    50      1     4        5     21      12    50
Total (62)                           46    74      13    21       15    24      29    47
LAB: significantly better than the 3 remote conditions
UCI vs Forum: no significant difference
UCI vs Diary: significant overall in favour of Diary; also significant for cosmetic problems
Forum vs Diary: significant overall in favour of Diary, but not significant at any single severity level
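As a quick sanity check, the percentage columns reported above can be recomputed from the raw problem counts. This is a small illustrative script; the counts themselves are taken directly from the slide.

```python
# Recompute the percentage columns: problems identified in each condition
# divided by the total number of problems per severity, as whole percent.
totals = {"Critical": 21, "Serious": 17, "Cosmetic": 24, "Total": 62}
found = {  # per condition: Lab, UCI, Forum, Diary
    "Critical": (20, 10, 9, 11),
    "Serious": (14, 2, 1, 6),
    "Cosmetic": (12, 1, 5, 12),
    "Total": (46, 13, 15, 29),
}
for severity, counts in found.items():
    pcts = [round(100 * c / totals[severity]) for c in counts]
    print(severity, pcts)
```

The recomputed values match the slide (e.g. Critical: 95, 48, 43, 52; Total: 74, 21, 24, 47).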
16. Results: Evaluator Effort

                        Lab      UCI     Forum   Diary
                        (46)     (13)    (15)    (29)
Preparation             6:00     2:40    2:40    2:40
Conducting test         10:00    1:00    1:00    1:30
Analysis                33:18    2:52    3:56    9:38
Merging problem lists   11:45    1:41    1:42    4:58
Total time spent        61:03    8:13    9:18    18:46
Avg. time per problem   1:20     0:38    0:37    0:39

The sum for all evaluators involved in each activity
Time for finding test subjects is not included (8h, common for all)
Task specifications came from an earlier study. Preparation in the remote conditions: working out written instructions
Considerable differences between the remote conditions for analysis and merging of problem lists
17. Conclusion
The three remote methods performed significantly below the classical lab test in terms of the number of usability problems identified
The Diary was the best remote method – it identified half of the problems found in the Lab condition
UCI and Forum performed similarly for critical problems but worse for serious problems
UCI and Forum took 13% of the lab test's evaluator effort; Diary took 30%
The productivity of the remote methods was considerably higher
19. Interaction Design and Usability Evaluation
Master in IT
Continuing education under IT-Vest
The subject package in Interaction Design and Usability Evaluation starts 1 February 2012
Admits bachelor graduates, with an entry path for computing AP graduates (datamatikere) as well
Information: http://www.master-it-vest.dk/