The GReAT project aimed to develop an affordable computer-based gesture therapy tool, Gest, to help people with severe aphasia practice gestures at home. Through participatory design sessions with people with aphasia, the team created a prototype for testing in a pilot study. Preliminary results found that Gest was used successfully and independently in diverse home settings and was enjoyed by users. However, the pilot study's full results were still needed to determine whether Gest improved gesture production or naming abilities and whether effects were maintained after use.
The document discusses labor management systems (LMS) and how they can save time and money for organizations. It describes who needs an LMS, such as front office staff, managers, and hourly employees. An LMS is defined as the integration of hardware, software, and user decisions to efficiently allocate human resources. An LMS provides features like easy-to-use scheduling, timekeeping integration, and real-time access to key performance indicators. The document argues an LMS can eliminate fraud and save over 6% of payroll costs through controls on overtime and unauthorized work hours.
This document provides background information on Nathaniel Hawthorne, the author of The Scarlet Letter, and the historical context of Puritan Boston in the 1640s, which served as the setting for the novel. It discusses Hawthorne's life, beliefs, and inspiration for writing about themes of sin, guilt, and their effects on individuals. An overview of the basic plot and main characters is given. Key scenes, symbols, and the structure of the novel are also summarized.
The document discusses various sheet metal forming processes including shearing, bending, spinning, deep drawing, hydroforming, and explosive forming. It provides images and explanations of different bending operations and machines used for sheet metal forming like press brakes. The document serves as a review of the many processes used to form sheet metal parts.
Venice is known for its canals, gondolas, and unique architecture. The Italian city is characterized by its Grand Canal, the Palazzo Ducale beside the sea, its famous Carnival, St. Mark's Square, and the La Fenice opera house.
Cross Team Testing presentation at DevLin 2013 (Johan Åtting)
Cross Team Testing is a way to tackle bias. Having testers embedded in development teams has many benefits but also brings new challenges. One of these challenges is that the testers become biased. Cross Team Testing is a structured way to tackle this bias. This is my presentation on the subject from DevLin, March 14, 2013.
The document discusses usability testing and provides guidance on how to conduct effective tests. It recommends testing throughout development, observing users rather than asking for feedback, and focusing on metrics like performance, errors, recall, engagement, and emotional response. It outlines planning tests, individual sessions, the observer's role, and producing a report. Key steps include recruiting appropriate participants, defining tasks, reviewing after, and using tools to capture user behavior.
The document provides guidance on end user testing, including its purpose and key principles. It discusses testing roles like moderator, note taker, and observers. It outlines the testing process from briefing to task-based testing. Key things to watch for include leading questions, asking design questions, coming across as too opinionated, and using technical terms. Overall the document aims to help make user testing insightful for improving product design and usability.
Henrik Andersson - Exploratory Testing Champions - EuroSTAR 2010 (TEST Huddle)
EuroSTAR Software Testing Conference 2010 presentation by Henrik Andersson on Exploratory Testing Champions. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
This document provides class notes from an empirical research methods course. It covers topics related to usability testing including different types of usability experiments, planning and executing a usability experiment, collecting and analyzing usability data, and testing usability in the field. Examples of specific topics discussed include within-subjects and between-subjects experimental designs, types of data to collect during usability testing, qualitative and quantitative analysis methods, and ethical considerations when conducting experiments with human subjects.
Exploratory Testing with the Team outlines a journey of implementing exploratory testing (ET) sessions with a development team. The document describes the problem that only testers were doing testing and developers did not see it as their responsibility. It then details six steps to set up ET sessions: 1) educate others on ET basics, 2) plan sessions with charters, 3) prepare test systems, 4) conduct time-boxed testing sessions, 5) debrief on findings, and 6) celebrate successes. Initially the sessions were successful, with many defects found. However, attendance declined over time as developers made excuses not to participate. The document advocates not giving up and keeping the spirit of ET by making it a regular part of development.
Metrics have always been used in corporate sectors, primarily as a way to gain insight into what is an otherwise invisible world. Not only that, "standards bodies", such as CMMI, require metrics to achieve a certain maturity level. These two factors tend to drive organizations to blindly adopt a set of metrics as a way of satisfying some process transparency requirement. Rarely do any organizations apply any statistical or scientific thought behind the measures and metrics they establish and interpret. In this talk, we'll look at some common metrics and why they fail to represent what most believe they do. We'll discuss the real purpose of metrics, issues with metric programs, how to leverage metrics effectively, and finally specific measure and metric pitfalls organizations encounter.
About Joseph Ours' Presentation – “Bad Metric – Bad!”
Metrics have always been used in corporate sectors, primarily as a way to gain insight into what is an otherwise invisible world. Organizations blindly adopt a set of metrics as a way of satisfying some process transparency requirement, rarely applying any statistical or scientific thought behind the measures and metrics they establish and interpret. Many metrics do not represent what people believe they do and as a result can lead to erroneous decisions. Joseph looks at some of the common and some of the humorous testing metrics and determines why they are failures. He further discusses the real purpose of metrics and of metrics programs, and finishes with pitfalls organizations commonly fall into.
Exploratory testing in an agile development organization (it quality & test ...) (Johan Åtting)
A case study of how a company (Sectra) uses Exploratory Testing in its agile development organization, where testers and developers sit together in cross-functional teams using Scrum.
The document discusses various topics related to usability testing, including:
1. An agenda for a usability technical workshop that covers topics like UX testing, usability vs UX, usability metrics, test design, recruitment, running tests, and data analysis.
2. Guidelines for test design that include defining metrics, success rates, tasks, and subject profiles.
3. Methods for measuring usability like success rates, time on task, error rates, and satisfaction.
4. Best practices for running usability tests like making participants comfortable, remaining neutral, taking detailed notes, and measuring both performance and subjective feedback.
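The measures listed in item 3 are straightforward to compute once session results are logged. A minimal Python sketch, where the task name, field names, and values are invented for illustration:

```python
from statistics import mean

# Hypothetical session log: one record per participant for one task.
sessions = [
    {"task": "checkout", "completed": True,  "seconds": 74,  "errors": 1},
    {"task": "checkout", "completed": True,  "seconds": 52,  "errors": 0},
    {"task": "checkout", "completed": False, "seconds": 120, "errors": 3},
    {"task": "checkout", "completed": True,  "seconds": 61,  "errors": 0},
]

# Success rate: fraction of participants who finished the task.
success_rate = mean(1 if s["completed"] else 0 for s in sessions)

# Time on task is commonly reported over successful attempts only.
time_on_task = mean(s["seconds"] for s in sessions if s["completed"])

# Error rate: average number of errors per attempt.
error_rate = mean(s["errors"] for s in sessions)

print(f"success rate: {success_rate:.0%}")
print(f"mean time on task: {time_on_task:.1f}s")
print(f"errors per attempt: {error_rate:.2f}")
```

Satisfaction, the fourth measure, is usually captured separately with a post-test questionnaire such as SUS rather than computed from the session log.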
If you’ve never requested a usability study bid before or you want to see how our process differs from others you have worked with in the past — this deck is for you. Here is the 5-step process June UX uses to plan and conduct moderated usability studies.
These slides provide an introduction to usability testing. This well-known method in user-centred design is used to improve products, by having participants interact with these products and by measuring their performances and responses.
I presented this topic as a guest lecturer to first-year Psychology students at the University of Twente on February 6th, 2017. Providing examples and best practices from Dutch digital design agency Mirabeau, I explained to them the required steps for the preparation, the moderation, and the analysis of usability tests. Moreover, I highlighted the importance of psychologists' knowledge, (research) methods and skills for design, which I believe to be invaluable.
Evaluation techniques can be used at all stages of the design process to test interfaces and identify problems. There are two main categories of evaluation: expert analysis and user participation. Expert analysis includes cognitive walkthroughs, heuristic evaluations, and review-based evaluations. User participation evaluations involve testing with users and can be done in laboratories, field studies, or experiments. A variety of techniques exist within each category to gather both qualitative and quantitative feedback. Choosing an evaluation method depends on factors like the design process stage, desired objectivity, and available resources.
These slides cover various aspects of testing in the context of computer science and human-computer interaction (HCI). They include topics like evaluation techniques, empirical methods in HCI, controlled experiments, and various testing approaches such as A/B testing, cognitive walkthrough, heuristic evaluation, and review-based evaluation. Additionally, they touch on issues related to internal and external validity, reliability in testing, and different experimental designs like between-subjects and within-subjects experiments. This material appears to be educational and is likely used in a course related to computer science or HCI.
All content within this presentation is the property of Royal Holloway, University of London. Unauthorized use, duplication, or distribution of the materials contained herein is strictly prohibited.
Liberating Structures 2 with blended f2f/online participation at #sfaddis (Euforic Services)
Slides used to support an experimental session at the May 2015 AgKnowledge Innovation Process ShareFair in Addis Ababa. We introduced some examples of Liberating Structures methods and tested out different options for remote participation.
This document discusses various methods for evaluating the usability of systems, including both analytic methods conducted by experts and empirical methods involving observations of and surveys with users. Empirical evaluations aim to draw valid conclusions about real-world usage but can be challenging due to issues with the representativeness of test users, the realism of test contexts and tasks, and whether collected data truly reflects real impacts. Field studies observe users in realistic contexts but are time-consuming, while lab studies allow more control but also reduce realism. Interviews rely on subjective user memory and perspective. Statistics like t-tests and ANOVAs can be used to analyze empirical data and determine statistical significance.
These slides explain evaluation techniques in HCI in an accessible way, with examples for each concept to aid understanding. If you find the material useful, please share it with family and friends, and don't forget to give it a like. Thanks!
Ian Franklin from IdeaSmiths discussing fitting Usability Labs into Agile sprints.
Traditionally, usability labs took a long time to organise, often amounted to little more than a usability bug hunt, and resulted in a lengthy report of recommendations that took weeks to produce and that no one read.
This talk covers how to adapt the usability lab to include discovery and co-creation, yet still record results rigorously while completing analysis and reporting within a couple of days.
It also covers how to counter the common objections to user feedback ("it's only 5 users", "it's just anecdotes") and how to use the lab to get stakeholders on side.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
More Related Content
Similar to GReAT Aphasia Technology Event January 2012
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application... (Alex Pruden)
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Introduction of Cybersecurity with OSS at Code Europe 2024 (Hiroshi SHIBATA)
I work on the Ruby programming language and on RubyGems and Bundler, the package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe (Precisely)
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Monitoring and Managing Anomaly Detection on OpenShift (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX models have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefit it brings you. Above all, you surely want to stay within budget and save costs wherever possible. We understand that, and we want to help!
We will explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expense, for example when a person document is used instead of a mail-in database for shared mailboxes. We will show you such cases and their solutions. And of course we will explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to stay on top of things. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices you can apply right away
Main news related to the CCS TSI 2023 (2023/1695) by Jakub Marek
An English 🇬🇧 translation of the slides accompanying the talk I gave on the main changes introduced by the CCS TSI 2023 at the largest Czech conference on railway communications and signalling systems, held at the Clarion Hotel Olomouc from 7 to 9 November 2023 (konferenceszt.cz). It was attended by around 500 participants and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The video recording (in Czech) of the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
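The summary above does not spell out the paper's mutation operators, so as a hedged illustration only, here is one plausible operator for a task-oriented chatbot: deleting a training phrase from an intent to emulate an under-trained design. The function name and the dictionary layout of the chatbot definition are hypothetical, not the paper's actual representation.

```python
import copy
import random

def delete_training_phrase(chatbot, seed=0):
    """Hypothetical mutation operator: remove one training phrase from a
    randomly chosen intent, emulating a fault where an intent is trained
    on too few utterances. Returns a mutant; the original is untouched."""
    rng = random.Random(seed)
    mutant = copy.deepcopy(chatbot)
    # Only mutate intents that would still have at least one phrase left.
    candidates = [i for i in mutant["intents"] if len(i["phrases"]) > 1]
    target = rng.choice(candidates)
    target["phrases"].pop(rng.randrange(len(target["phrases"])))
    return mutant

# Toy chatbot design (hypothetical format).
bot = {"intents": [
    {"name": "book_flight",
     "phrases": ["book a flight", "I need a plane ticket", "fly me to Paris"]},
    {"name": "greet", "phrases": ["hello"]},
]}
mutant = delete_training_phrase(bot)
```

A test suite that still passes against such a mutant has failed to exercise the weakened intent, which is exactly the signal mutation testing uses to measure test strength.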
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language has adapted accordingly: we no longer talk about information systems but about applications. Applications have evolved to break data into diverse fragments, tightly coupled to the applications and expensive to integrate. The result is technical debt, which is repaid by taking out ever bigger "loans", producing ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...) by Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Generating privacy-protected synthetic data using Secludy and Milvus, by Zilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
1. Aphasia and Technology:
The GReAT Project
Abi Roper and Jane Marshall
On behalf of the GReAT Project Team
Division of Language and Communication Science
Department of Human Computer Interaction Design
City University London
2. Presentation Outline
•The Project
•Designing and Refining a Computer Gesture Therapy - Gest
•Gest Demonstration
•Delivering a Computer Therapy
•Gest Pilot Study
•Preliminary Outcomes
3. Project Aims
• To develop an affordable, computer-based
technology that can be used in therapy at home to
help people with severe aphasia to gesture.
• To establish how to design effective/engaging
interactions for people with aphasia.
• To evaluate the efficacy of the technology within a
pilot therapy study
4. Project Structure
• Phase 1: Designing a prototype gesture
therapy using participatory design
methods.
• Phase 2: Testing and piloting the
prototype
5. Project Team
Human Computer Interaction Design & Language and Communication Science
Stephanie Wilson Sam Muscroft Julia Galliers Jane Marshall
Naomi Cocks Tim Pring Abi Roper
6. Phase 1
• Designing a prototype gesture
therapy using participatory
design methods.
8. Consultants
•Role: to test and feedback about relevant technology.
•Person Specifications:
–Expressive language difficulties due to aphasia.
–Able to attend university once or twice a month for
participatory design sessions.
•Recruited through the in-house clinic and through links with the Stroke Association Communication Support Co-ordinators.
•Employed by City University London as Casual Staff
members.
9. Methods: Participatory Design Sessions
•Participatory design – engaging end users in design
process
•Sessions explored offline gesture therapy, computer
gesture recognition, interaction within 3D worlds and
computer interfaces.
•Consultants took part in 9 sessions each
•Project team involved in each session
- 1 HCID Researcher
- 1 HCID Developer
- 1 Speech and Language Therapist Researcher
- 2 or 3 Consultants
10. Session Structure
1. Introduction to scheduled
activities
2. Round table gesture activity
3. Demonstration of Technology
4. Trial use of technology by one
consultant - followed by
interview at computer
5. Tea break
6. Trial use of technology by
remaining consultant(s)
12. What did we learn from the Sessions?
1. Consistency
2. Simplicity
3. Pace
4. Reliability
5. Rewards
6. Individual Differences
7. Potential of ‘gaming’.
15. Using the Therapy at Home
•How does this work at home?
Key differences between lab and home:
– User practising independently
– User intending to practise daily
– User practising in non-lab conditions
16. Things to consider when setting up
•Lighting conditions
•Safety and permanence
(negotiate!)
•User comfort and access
17. Things to consider when training
•Develop the user’s confidence in the system.
(Be confident yourself)
Demonstrate:
1. Allow user to observe entirely
2. Allow user to observe and operate
interaction buttons
3. Allow user to operate alone but with
support as needed (confidence)
18. Things to consider when training
•Reinforce how to switch the computer
on and off several times.
•Make an appointment to come back in
one week to review.
•At review appointment, observe and re-train difficult procedures.
20. Questions
• Will practice with Gest improve participants’ production of
gestures &/or spoken words?
• Will improvements be specific to items that feature in the
programme?
• Will gains occur when Gest is used without ongoing
therapist support?
• Will gains be maintained after Gest is withdrawn?
• What are participants’ views about Gest?
• What are carers’ views about Gest?
(where relevant)
• Is Gest easy and enjoyable to use?
21. Participants
• 10 people with severe aphasia
– Consent to take part
– Fluent pre-stroke users of English
– Naming score <20%
– Able to recognise pictures
– No known dementia or other cognitive impairment
22. Study Timeline
• Consent
• Screening
• Tests (1)
• Practice Phase 1: 3 weeks, with weekly visits from therapist
• Tests (2)
• Practice Phase 2: 3 weeks, with no weekly visits from therapist
• Tests (3)
• 3 weeks with no tool
• Tests (4)
Total time commitment: about 14 weeks
23. Practice Phases
• Each lasts 3 weeks
• In each, participants practise 15 gestures with the tool
• Phase 1: Weekly visits from therapist
• Phase 2: Initial visit but no weekly visits
24. Tests
• 60 items
– Gesture from picture (‘How would you gesture this?’)
– Name from picture (‘What is the name of this?’)
• Items:
– 30 practised with Gest
– 15 familiarised only
– 15 controls
25. Scoring Gestures
• Gesture tests are filmed
• 4 Scoring videos created
• Each video contains 60 gestures in random
order:
– 15 from test 1
– 15 from test 2
– 15 from test 3
– 15 from test 4
26. Scoring Gestures
• Scores
– Recognition Score
– Rating Score
• Scorers are ‘blind’ to the time of assessment
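The blinded-scoring setup above (60 gestures per video, 15 from each of the four test points, in random order) could be sketched as follows. The function name and the (test, item) tuple representation are assumptions for illustration, not the project's actual tooling.

```python
import random

def build_scoring_video(items_per_test=15, n_tests=4, seed=42):
    """Sketch: pool the filmed gestures from each of the 4 test points
    and shuffle them, so the scoring video presents all 60 gestures in
    an order that carries no information about time of assessment."""
    rng = random.Random(seed)
    # Each entry is a hypothetical (test_number, item_number) pair.
    pool = [(test, item)
            for test in range(1, n_tests + 1)
            for item in range(items_per_test)]
    rng.shuffle(pool)  # scorers see the items in this blinded order
    return pool

video = build_scoring_video()
```

Shuffling across test points is what keeps scorers ‘blind’: a rater cannot tell whether a given clip was filmed before or after therapy.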
27. Usability Evaluations
• Observe participants using the tool
• Interview participants
• Interview carers (if relevant)
– When technology is installed
– After each practice phase
28. Usage Logs
• Record
– Number of sessions
– Length of sessions
– Levels of programme accessed
– Number of gestures recognised
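The usage metrics reported on the following slides (number of sessions, total hours, minutes per session) could be derived from per-session timestamps roughly as below. The log format and function name are hypothetical; the actual Gest logs are not described in this deck.

```python
from datetime import datetime

def summarise_usage(log):
    """Sketch: compute number of sessions, total time used (hours) and
    mean time per session (minutes) from (start, end) timestamp pairs."""
    minutes = [(end - start).total_seconds() / 60 for start, end in log]
    n = len(minutes)
    return {
        "sessions": n,
        "hours": round(sum(minutes) / 60, 1),
        "mins_per_session": round(sum(minutes) / n, 1) if n else 0.0,
    }

# Two example sessions of 30 minutes each (made-up timestamps).
log = [
    (datetime(2014, 5, 1, 10, 0), datetime(2014, 5, 1, 10, 30)),
    (datetime(2014, 5, 2, 9, 15), datetime(2014, 5, 2, 9, 45)),
]
stats = summarise_usage(log)
```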
30. Mean Usage: 7 Participants
[Bar chart: mean days available, number of sessions, time used (hrs) and time per session (mins); scale 0–60]
31. Individual Usage: 3 participants
[Bar chart per participant: days available, number of sessions, time used (hrs) and time per session (mins); scale 0–80]
32. Usage x Recognition
[Bar chart: days available, number of sessions, time used (hrs) and time per session (mins) on a 0–80 scale, alongside recognition score on a 0–300 scale]
33. Mean Usage over Phases
[Bar chart comparing supported vs independent phases: number of sessions (21–28 scale) and time spent in mins (0–400 scale)]
34. Usage: Levels
• Three participants use level 1 more than 2 & 3
• Two participants use all 3 levels and rate them
equally highly
• Two participants rate levels 2 & 3 more highly than 1
• Possibly contingent on navigation abilities
35. Usage Observations: Challenges
• Set up
– Lighting
– Positioning (e.g. wheelchairs)
– Security
• Glove
– Putting glove on the wrong hand
– Using the peg board (although often not
necessary)
36. Usage Observations: Challenges
• Starting and stopping
– Pressing keyboard buttons before the menu has appeared
– Not always pressing ‘off’ at end of session
37. Usage Observations: Challenges
• Navigation
– Variable use of OK, forward, back & menu buttons
– Variable navigation between levels
– Some unprincipled button pushing
Speed and competence may relate to prior
computer usage
38. Usage Observations: Challenges
• Gesture production
– Knowing when to gesture; waiting for the 3-2-1 ping
– Knowing when the gesture has been recognised
– Variable use of cues; e.g. some adjust handshape in response to the glove image, others do not
39. Usage Observations: Enjoyment
• All signal high enjoyment levels
– Thumbs up sign
– Drawn smiley face
• Positive reactions to level 2
– Game format
– Narrative context
– Environments
40. Usage Observations: Enjoyment
• Positive reactions to level 3
– Humour (spider, dentures)
– Stroke survivors as actors
– Presence of children
41. Other Observations
• Some target spoken words produced during
Gest use
• Spontaneous uses of practised gestures
(‘umbrella’ gestured when participant noticed
that it was raining outside; ‘child’ gesture
when talking about grandchild)
43. Independence of Use
• ‘She uses it all on her own, I don’t know how to operate it’
• ‘The first session I stayed with L, after that I’ve helped only if she’s found something particularly frustrating’
• All comment that the participant initiated use
of Gest
44. Enjoyment
• All say that the participant enjoyed Gest
• ‘he likes it when they clapped’
• ‘some of the gestures are particularly fitting and she enjoyed rainbow’
45. Views about Technology
• ‘I was a technophobe and when they said “computer” I thought it was going to cause problems. I thought I wouldn’t understand it and he wouldn’t understand it. But it’s so easy’
46. Reservations
• Carry over to real life (1 carer):
• ‘While she works on it here (points to computer) it doesn’t necessarily translate’
• ‘She wanted a hankie last night and didn’t make a gesture’
47. Conclusions
• Gest was created through participative design
involving people with aphasia
– It offers 6 packages of hierarchical practice on 30 gestures
– It is accessible even to people with severe strokes
– It can be used successfully in diverse home settings
– It allows for flexible, self-directed practice and is typically used intensively
– It is enjoyable to use, with no reports of increased ‘carer
burden’
48. Conclusions
• But we do not know if
– Gest improves gesture production
– Gest improves spoken naming
– Effects generalise to unpractised targets
– Effects are maintained
• The results of the pilot study will give us
answers to these questions
49. Acknowledgements
The Research Councils UK Digital
Economy Programme
The Stroke Association
Consultants and their families
Participants and their families
Thank You
GReAT@city.ac.uk
www.soi.city.ac.uk/great