This document describes a tool developed to measure continuous complexity in software. The tool measures complexity along three dimensions: number of steps, number of context shifts, and working memory load. It gives developers feedback on complexity during development, much earlier and faster than detailed user testing or psychological modeling would allow. While focused on continuous complexity, it also allows developers to document cases of discontinuous complexity, where usability is severely hindered. The goal is to provide a practical way for developers to quantify and reduce complexity throughout the development cycle.
Note on Tool to Measure Complexity
A Tool to Measure Continuous Complexity
John C. Thomas, John T. Richards, Cal Swart, Jonathan Brezin
IBM T.J. Watson Research Center
PO Box 218, Yorktown Heights, New York 10598, USA
ABSTRACT
In this paper we describe two types of complexity: continuous complexity and discontinuous complexity. We describe a tool to measure continuous complexity that we developed with an eye both to psychological theory and to the practicality of the software development process. The use of the tool requires developers to change perspective from system functionality to the tasks required of the users, and this in itself has value. The quantitative output of the tool provides metrics to measure progress in reducing overall complexity as well as pin-pointing where a required task is particularly complex. Although this tool focuses on continuous complexity, it also allows the developer to document instances of discontinuous complexity. We illustrate the output of the tool for anonymized installations.

Author Keywords
Complexity, metrics, usability, modeling, methods, tools.

ACM Classification Keywords
H5.m. Information interfaces and presentation: Miscellaneous; H.1.2 Models and principles: User/machine systems; I.6.3 Simulation and modeling: Applications.

INTRODUCTION
Complexity is a widely used concept in fields as diverse as biology, computation, economics and psychology. Here, we are concerned primarily with the psychology of complexity. The terms “psychological complexity” and “cognitive complexity” are often used interchangeably, although “psychological complexity” is potentially a broader term that could encompass, for example, the emotional complexity of relationships, the perceptual complexity of a fine Cognac, or the motor complexity of a finely tuned golf swing. Although these types of complexity might impact usability, investigators in Human Computer Interaction have generally focused on “cognitive complexity” [6].

It is useful to distinguish between intrinsic complexity (that supplied by the nature of the environment or the task itself) and gratuitous complexity (additional complexity imposed by a tool, system, or artifact). In general, we want to minimize gratuitous complexity, although in a few contexts that may not be the case. For instance, in vigilance tasks or in educational, aesthetic, or entertainment settings, minimization of complexity might not be the goal. Regardless of just how complex one wishes to make a system, however, having a reliable and valid way to measure complexity is important.

APPROACHES TO MEASURE COMPLEXITY
One useful approach to cognitive complexity is the work on the “Cognitive Dimensions of Complexity” [4]. It is probably better thought of as a useful tool to aid discussion than as a “metric” of complexity. More quantitative methods of modeling human behavior include, notably, GOMS [2] and EPIC [5]. The use of these models is not limited to human-computer interaction, but they can certainly be applied there. In cases where an economically significant number of users will be using a product for a significant amount of time, such approaches can be quite useful [3]. However, in other cases, systems and applications are developed for a small number of users; increasingly, end users are even creating essentially ad hoc applications for themselves. In addition, many applications and systems are subject to updates that are made so frequently that such an extensive approach to modeling is not economically feasible. Finally, such fine modeling is best done after special training in addition to a background in cognitive psychology. For these reasons, we set out to develop a tool to help developers quickly obtain reliable and useful quantitative metrics about the complexity of what they were building.
CONTINUOUS AND DISCONTINUOUS COMPLEXITY
Our main goal was to develop a tool that produced metrics of complexity. We envisioned this to be something that would give metrics in terms of a fairly continuous scale. In our experience, both with observing usability and in attempting to build usable applications, we also see cases where the most natural description is that usability is discontinuous. That is, the user does not simply feel somewhat more frustrated, take a little longer to do a task, or make a few more errors along the way. Rather, it often happens that the user is completely prevented from making any further progress without outside intervention. For this reason, we wanted to include in the tool a simple facility for documenting such cases.

As an example of discontinuous complexity, an installation process was meant to install several components and the installation kept failing. The associated log file was empty. After several tries, the expert user attempted to install one of the components by itself. In this case, an informative error message returned indicating that there was not enough memory. The user added memory, re-ran the installation and succeeded. In this case, an underlying informative error message was somehow “blocked” in the over-arching process and not surfaced despite its criticality.

THE CONTEXT OF OUR MODEL
Special purpose tools are ideally developed with respect to a particular context, set of tasks, and set of users. In our case, we wanted to develop a tool that would be useful to developers doing actual software development.

The Software Development Process
Software development has become, in many ways, a race against time. While it is difficult to amass overall statistics, even the potential outliers provide some insight. For example, one software system issued successive releases on 6-19-01, 7-23-01 and 8-17-2001. Another site lists release dates as 4-20-2005, 7-7-2005, 9-28-2005, 2-28-2006, and 3-10-2006. Realistically, how much do such schedules allow for unit and functional testing, let alone user testing or constructing detailed psychological modeling? The educated guess would be: little time indeed.

The Culture of Metrics and Tools
Our particular corporate culture places a high value on “objective” measurement. To the extent that we can provide a tool that offers a way to measure complexity with a minimum of interpretation on the input side and a maximum of quantification on the output side, that increases the likelihood of adoption as well as acceptance of results. That may or may not be ideal, but it is a reality that faced our team as designers.

The Utility of Faster Feedback in Learning
Studies have long indicated that delayed feedback can be very confusing and disruptive; for instance, talking with delayed auditory feedback or watching one’s motor performance with delayed visual feedback can be very disruptive. When feedback is delayed, it can also make learning extremely difficult. Mere passage of time makes learning more difficult and, in addition, in the real world it often makes the attribution of error (and therefore the choice among potentially corrective strategies) more ambiguous. There is a trade-off, therefore, between tools that provide the greatest verisimilitude to real-world usage (which requires, ultimately, real users using the tool with real documentation, real product and real support systems) and those that are available as early as possible in the design-development-deployment life cycle.

THE MODEL UNDERLYING THE TOOL
The complexity model underlying the tool is based on the work of Brown, Keller and Hellerstein [1]. This model measures complexity along three dimensions: the number of steps in a process, the number of context shifts, and the working memory load required at each step.
Rationale for Number of Steps
Of course, not all “steps” are equal, and so using the sheer number of steps as a metric is somewhat arbitrary. However, in most of the tasks we have studied so far (installation, configuration, and simple administration), the steps can be defined fairly objectively. In GUIs, every new screen is considered one step. In line-oriented interfaces, every “Enter” is considered another step. Typically, in comparing alternative products or various versions of one product, the “steps” are fairly similar in complexity (except as captured in the other two metrics, i.e., memory load and context shifts). There are two major shortcomings of the model as applied to straight-line processes. One is that it does not capture the complexity of the reading that is required either on the screen or with accompanying documentation. The second is that it does not measure how much background knowledge is required to decide which items need to be noted for future reference. Nonetheless, in general, as processes gain more steps there is a fairly uniform increase in the chance of an error and, certainly, an increase in time. As these tasks are performed in the real world, each additional step also increases the probability of being interrupted by some other task, which again increases the chance of error and requires added time to recover state.

Rationale for Memory Load
Memory load is increased whenever the user sees something on a screen that must be stored and used for some future step. Again, in detail, we know that the actual memory load will depend on the type of item that needs to be stored and on the user’s experience and strategies. However, as a first approximation, each new “item” that the user must take note of and remember increases felt complexity as well as the chance for error. Even without error, it takes longer to recover a particular item from working memory if there are more items in working memory.
Rationale for Context Shifts
Context shifts were originally defined by the model builders in terms of computing contexts (e.g., server vs. client, or operating system vs. database). We have kept such changes as context shifts but broadened the definition to include shifts between applications or between installation components. If an installation requires the installation of three sub-components, these components often have somewhat different appearances and conventions.

Context shifts can be disruptive to working memory. In addition, different contexts often employ different conventions, and this can cause interference resulting in longer latencies, a greater chance of error, or both.
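As a minimal illustration of how the three dimensions might be tallied for a recorded task, the TypeScript sketch below applies the conventions described above: each screen or “Enter” is one step, a context shift is counted when consecutive steps occur in different contexts, and memory load counts the items a user must note for later use. The type names, fields, and the simple summing of memory items are illustrative assumptions on our part, not the tool’s actual code.

// Illustrative sketch only; the tool itself keeps its model in JavaScript
// structures on the client (see THE STRUCTURE OF THE TOOL AND DATABASE).
interface Action {
  description: string;      // one GUI screen, or one "Enter" in a line-oriented interface
  context: string;          // e.g., "installer", "operating system", "database"
  itemsToRemember: number;  // items shown at this step that must be retained for a later step
}

interface TaskModel {
  name: string;
  actions: Action[];        // one entry per step, in the order performed
}

function scoreTask(task: TaskModel) {
  let contextShifts = 0;
  let memoryLoad = 0;
  task.actions.forEach((action, i) => {
    // A context shift is counted whenever consecutive steps occur in different contexts.
    if (i > 0 && action.context !== task.actions[i - 1].context) {
      contextShifts += 1;
    }
    // Summing per-step item counts is one simple aggregation of working memory load.
    memoryLoad += action.itemsToRemember;
  });
  return { steps: task.actions.length, contextShifts, memoryLoad };
}

const install: TaskModel = {
  name: "Default installation",
  actions: [
    { description: "Launch installer", context: "installer", itemsToRemember: 0 },
    { description: "Note the port number displayed", context: "installer", itemsToRemember: 1 },
    { description: "Enter the port in the admin console", context: "admin console", itemsToRemember: 0 },
  ],
};
// scoreTask(install) yields { steps: 3, contextShifts: 1, memoryLoad: 1 }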
ITERATIVE TOOL DEVELOPMENT PROCESS
The original model that we built on took a detailed XML description of the task as input. We thought it unlikely that developers would use a complexity metric that required this. Therefore, we developed a GUI tool to allow users to define tasks, action steps, context switches, and memory load without having to use XML. The tool was used by a small group of people for some months. Interviews, observations, and spontaneous comments were all used to drive a continuous round of changes in the interface.

THE STRUCTURE OF THE TOOL AND DATABASE
The tool we developed to facilitate the data entry required by the model is a classic dynamic HTML application with the persistent data held in a relational database at the server. The client is implemented with XHTML and JavaScript, and the server with PHP and MySQL.

There are two phases from the point of view of user input: creating or choosing the task to work on and, for a fixed task, entering its details. Working with the task list is simple and handled easily with a drop-down list and a button or two. The essential problem is to use the screen real estate effectively to enter the details for a fixed task, which involves two parallel lists, actions and data, both of which can be expected to be on the order of several tens of entries long, perhaps a hundred or so. There is also a much smaller list of contexts, fewer than half a dozen or so in the vast majority of cases.

Actions take place in contexts, and they produce and consume data, so it should be convenient to enter new items of any of these types at any point. To answer this need, we used a tabbed display, one tab each for actions, data, and contexts, and one for the final scoring of the model. (The tabs are implemented using the Dojo JavaScript toolkit.)

The model for a single task is sufficiently small that it can easily be kept in JavaScript data structures, so moving between tabs involves no delay. When, for instance, the actions tab is selected, a list of actions is shown (in the order in which they occurred), and one action is shown as selected. The selected action’s details are visible for editing, and buttons are used to maintain the list: inserting a new action, reordering those already there, or marking the points at which context switches occur. Editing changes are immediately transmitted by an asynchronous HTTP POST to the server, where they are written to a database that is used to maintain the persistent state of the task model. The data and context tabs differ from the actions tab only in their view of the content of the model.
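As a sketch of this update path, the fragment below shows one way an edited action could be posted immediately to the server. The endpoint name, payload fields, and use of the modern fetch API are our assumptions for illustration; the tool itself uses Dojo-era asynchronous requests against its PHP/MySQL back end.

// Hypothetical names throughout; shown only to illustrate the pattern of
// "edit in the browser, POST immediately, persist in the server database".
interface ActionEdit {
  taskId: number;
  actionId: number;
  description: string;
  contextId: number;
  startsNewContext: boolean; // marks a point at which a context switch occurs
}

async function saveActionEdit(edit: ActionEdit): Promise<void> {
  const response = await fetch("/tool/updateAction.php", { // assumed endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(edit),
  });
  if (!response.ok) {
    throw new Error(`Save failed with status ${response.status}`);
  }
}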
The relational database is straightforward: it requires tables for tracking users, tasks, actions, data, contexts, and data usage. The latter tracks pairs of data items and actions to record which actions produce, and which consume, what data. The users table allows us to use a single database to serve many users. Each task table entry has a column that indicates which user’s task it is. Each of the remaining tables (actions, etc.) has a column that indicates which task it belongs to. Thus each action, etc., belongs to a single task, and each task to a single user.
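One way to picture that layout is sketched below as row shapes for the six tables just described; the names and exact columns are our illustrative guesses, not the tool’s actual schema.

// Hypothetical row shapes mirroring the tables described above.
interface UserRow    { userId: number; name: string; }
interface TaskRow    { taskId: number; userId: number; name: string; }      // userId: whose task it is
interface ContextRow { contextId: number; taskId: number; label: string; }  // taskId: which task it belongs to
interface ActionRow  { actionId: number; taskId: number; contextId: number; position: number; description: string; }
interface DataRow    { dataId: number; taskId: number; label: string; }
// Data-usage rows pair an action with a data item and record whether the
// action produces or consumes that item.
interface DataUsageRow { actionId: number; dataId: number; role: "produces" | "consumes"; }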
For purposes of communicating the model to other applications that might wish to use it, there is an XML Schema that describes an XML document for the raw data associated with a single task. There will be a forthcoming Schema for the scored model as well. One common and convenient method of working is to open two instances of the tool: one where successive action steps are noted and one where data items are noted in order to calculate memory load. In some cases, tool users watch an expert perform a task, take notes, and then code the result. In other cases, an expert performs a task such as installation and captures each step via screen shots, which are then sent to the tool user for coding.

EXAMPLE CONTINUOUS RESULTS
The tool allows developers and managers to calculate overall metrics for their products and to gauge progress through successive releases. Figures 1 and 2 show anonymized results for the installation of comparable products in terms of steps and memory load, respectively. The first figure shows that products differ significantly in the number of steps required and that custom installs require only a few more steps than taking all the defaults.
Figure 1. Action steps needed to install comparable products when taking all defaults and for a custom installation.

Figure 2 shows, however, that custom installs require considerably more memory load. Taken together, these figures illustrate that simple but meaningful comparisons between whole software packages are possible using our tool.

Figure 2. Memory load needed to install comparable products when taking all defaults and for a custom installation.

In addition to allowing overall comparisons, the tool can help pinpoint specific areas of complexity in terms of memory load, as shown in Figure 3 below.

Figure 3. Blue “spikes” illustrate action steps that require a particularly high memory load.

USAGE
The tool has had 75 people register to use it. Interviews with a subset of users find general agreement that the current user interface is relatively straightforward to use and a significant improvement over the first iteration. The tool is being used by personnel in a number of different product lines within our company. It is used both by developers and by UI practitioners.

CONCLUSION
The tool, though based on a simplified model of cognitive complexity, is useful and usable by developers during development. To the extent that development teams take the effort to really engage in “Outside-In Design” and specify relatively detailed user tasks in advance of system design, the tool can be used even earlier in the overall system development process. The tool helps management determine roughly the competitive position of their products with respect to complexity and whether progress toward simplification is being made with successive versions. For developers and HCI professionals, the tool also provides pointers to those places in a task which require a particularly large memory load, thereby focusing efforts to improve usability.

ACKNOWLEDGMENTS
We thank anonymous reviewers as well as our colleagues for comments and suggestions regarding this paper. We thank our users of the tool and our funders.

REFERENCES
1. Brown, A. B., Keller, A. and Hellerstein, J. L. (2005). A model of configuration complexity and its application to a change management system. In Proceedings of the Ninth IFIP/IEEE International Symposium on Integrated Network Management (IM 2005).
2. Card, S. K., Moran, T. P., and Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, N.J.: Erlbaum.
3. Gray, W. D., John, B. E., Stuart, R., Lawrence, D., & Atwood, M. E. (1990). GOMS meets the phone company: Analytic modeling applied to real-world problems. In Proceedings of IFIP ’90: Human-Computer Interaction, 29-34.
4. Green, T. R. G. (1989). Cognitive dimensions of notations. In A. Sutcliffe & L. Macaulay (Eds.), People and Computers V. Cambridge: Cambridge University Press.
5. Kieras, D. and Meyer, D. (1997). An overview of the EPIC architecture for cognition and performance with application to human-computer interaction. Human-Computer Interaction, 12, 391-438.
6. Rauterberg, M. (1996). How to measure cognitive complexity in human-computer interaction. In Cybernetics and Systems ’96, 815-820. Vienna: Austrian Society for Cybernetic Studies.