[Video link below] The single largest reason for user rejection of a language is how it looks. And let’s be honest, even most successful languages are pretty ugly – from the sea of angle brackets that is XML to the monochrome lines of UML. In this session we will look at how to design languages to make best use of the limited input, processing and output capabilities of the weakest link in software development: humans. Cognitive and empirical research has produced a number of results that are often surprising, always enlightening, yet all too rarely used. We will look at these results and examples of good and bad languages, both textual and graphical, focusing on their concrete syntax.
Video from Code Generation 2012: http://www.infoq.com/presentations/Language-Design
Design Scripts: Designing (inter)action with intent (Bas Leurs)
The document discusses design scripts, which are ways that artifacts can prescribe or influence how users interact with and behave around the artifact. Some examples are given, like how speed bumps are designed to signal drivers to slow down. The document also discusses how designers aim to predict how users will interact with their designs and shape user behaviors. Finally, it notes that the principles of design scripts can be found in many fields that aim to influence human cognition, attitudes, and behaviors through the design of artifacts and environments.
This is a presentation about the fusion of collaboration and learning. It shows that it is possible to map the theory of collaboration to the theory of learning.
This document discusses applying Howard Gardner's theory of multiple intelligences to legal education. It outlines Gardner's eight intelligences and their relevance to lawyering skills. Currently, legal education primarily values linguistic and logical intelligences through the Socratic method and exams. The document proposes alternative teaching methods that engage different intelligences, such as simulations, group work, and experiential learning. This could improve legal instruction and make evaluation more comprehensive.
The document discusses artificial intelligence and the ability to make optimizing choices based on inputs, outputs, and rewards. It explores examples of AI systems like a chess player and vacuum cleaner and discusses generalizable intelligence. It covers philosophy around bias, induction, simplicity, and using the smallest models that can accurately predict patterns. The document emphasizes learning patterns from inputs using world models to make predictions and optimize choices.
1. Four themes are emerging from exploring new advisory board approaches: connecting all locations, regional hubs, showcasing for executives, and engaging the client ecosystem.
2. The document discusses using a mix of virtual and physical spaces to address client issues. It proposes analyzing collaborative tools, inventorying use cases, and mapping process steps and scales of change.
3. Areas to research include community/collaboration spaces, immersion techniques, and tools for complex conceptualization. The objective is to define requirements for virtual, physical, and projection spaces in terms of architecture, tools, and design.
Winning Big in UX: Changing the problem-solving culture in organizations (Jay Morgan)
Cognitive biases warp our perceptions of organizational culture and the problems we can help the organization solve. Biases also warp our clients' view of us, their expectations of what we can contribute, and their mental models of the role(s) we can play.
I encourage you to:
- Recognize that you are a member of UX as a culture, which goes beyond your role.
- Be an agent of culture change by merging UX culture with the organizational culture in which you are immersed.
- Help organizational culture adopt the practices and values of UX culture.
- Apply your UX skills to improve problem-solving and decision-making as a way to merge the two cultures.
Presented on Sunday, April 10, 2011, at the MidwestUX Conference in Columbus, OH.
Creating Documentation Your Users Will Love (Ena Arel)
What are users looking for in technical documentation? This presentation describes 10 best practices in information development based on usability testing.
Deep Learning for Computer Vision: A comparison between Convolutional Neural... (Vincenzo Lomonaco)
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general.
However, despite the strong success both in science and business, deep learning has its own limitations. It is often questioned if such techniques are only some kind of brute-force statistical approaches and if they can only work in the context of High Performance Computing with tons of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and if they can scale well in terms of “intelligence”.
The dissertation focuses on answering these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily reshaped by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view under the broad umbrella of deep learning, and are well suited for understanding and pointing out the strengths and weaknesses of each.
The CNN is considered one of the most established and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are widely accepted by the scientific community and are already deployed at large corporations like Google and Facebook to solve face recognition and image auto-tagging problems.
HTM, on the other hand, is an emerging paradigm: a mainly unsupervised method that is more biologically inspired. It draws on insights from the computational neuroscience community in order to incorporate concepts typical of the human brain, such as time, context, and attention, into the learning process.
In the end, the thesis aims to show that in certain cases, with less data, HTM can outperform the CNN.
Textsl: a screen reader accessible virtual world client for Second Life (Eelke Folmer)
Virtual worlds are not accessible to users who are visually impaired, as they lack any textual representation that can be read with a screen reader. We developed an interface modeled after text-based adventure games like Zork that allows a screen reader user to iteratively interact with the popular virtual world of Second Life.
Beyond Buzz - Web 2.0 Expo - K. Niederhoffer & M. Smith (kategn)
This document discusses measuring conversations on social media platforms. It begins by outlining the goals of capturing the depth of discussion beyond superficial metrics like buzz or followers. It emphasizes the importance of understanding individuals by examining their language use and social network roles. Finally, it stresses analyzing the overall ecosystem by identifying the types of groups and roles that emerge within different discussion spaces. The key is moving beyond isolated metrics to understand the rich context and dynamics of online conversations.
The document discusses research on instruction that emphasizes congruent sensorimotor experience and visualization. This type of instruction has been found to improve comprehension, reading fluency, and problem solving abilities. The document also discusses how perceptual knowledge is transformed into conceptual knowledge and schema through identifying affordances of action and potential actions. This allows students to construct situation models to understand context, meaning, and usage.
Toward Tractable AGI: Challenges for System Identification in Neural Circuitry (Randal Koene)
This is the presentation I gave at AGI-12 (also called the Winter Intelligence 2012 conference) in Oxford, UK, on Dec. 11, 2012. There is an AGI-12 proceedings paper that accompanies this talk. I will make that available on my publications page at http://randalkoene.com and I will put both together on the http://carboncopies.org page about this event. The video (recorded by Adam Ford) should also appear soon.
Abstract. Feasible and practical routes to Artificial General Intelligence involve short-cuts tailored to environments and challenges. A prime example of a system with built-in short-cuts is the human brain. Deriving from the brain the functioning system that implements intelligence and generality at the level of neurophysiology is interesting for many reasons, but also poses a set of specific challenges. Representations and models demand that we pick a constrained set of signals and behaviors of interest. The systematic and iterative process of model building involves what is known as System Identification, which is made feasible by decomposing the overall problem into a collection of smaller System Identification problems. There is a roadmap to tackle that includes structural scanning (a way to obtain the “connectome”) as well as new tools for functional recording. We examine the scale of the endeavor, and the many challenges that remain, as we consider specific approaches to System Identification in neural circuitry.
This is the checklist used by students to help them self-assess to identify strengths and weaknesses of their essays.
Scroll down to the third page and you'll see the rubric I used for the final, summative assessment. It has the wrong title on it - 'belonging' oops!
Building on the Usability Study: Two Explorations on How to Better Understan... (mprabaker)
The document discusses two explorations into better understanding user interfaces. It examines measuring intuitiveness and emotional impact as part of evaluating user experience beyond traditional usability measures. To measure intuitiveness, it proposes combining novice and expert user performance evaluations. To measure emotion, it combines physiological, verbal and non-verbal techniques, including a PAD semantic scale and Emo-Card tool. An empirical study found these emotional measures detected differences between interfaces that traditional measures did not. The document concludes traditional usability measures may be missing valuable emotional aspects of user experience.
This document discusses textual analysis and interpretation from several perspectives:
1) A text is related to and influenced by its context, other texts, the world, and the reader/writer.
2) Texts represent reality through language but also construct their own reality, influenced by social and ideological forces.
3) Literary texts in particular can be analyzed to reveal how they present ideological constructions through certain techniques, representing the influence of social and historical context on both the writer and reader.
Gardner’s multiple intelligences planning grid with activity ideas and starte... (Jacqui Sharp)
This document provides a planning grid with activity ideas for each of Gardner's Multiple Intelligences. The grid lists verbs and starter words associated with each intelligence and provides examples of activities and tools that can be used to design lessons targeting each intelligence. It includes ideas for remembering, understanding, applying, analyzing, evaluating, and creating for the eight intelligences: verbal/linguistic, logical/mathematical, visual/spatial, bodily/kinaesthetic, musical, interpersonal, intrapersonal, and naturalist.
This document discusses the history and recent developments in artificial intelligence and deep learning. It covers early work in neural networks from the 1950s through the 1990s, including perceptrons, autoencoders, and connectionism. More recent progress is attributed to greater computing power, larger datasets, and the development of automatic differentiation techniques. Applications discussed include computer vision, natural language processing using word embeddings, and recurrent neural networks for tasks like handwriting generation.
The document discusses Walt Disney's early experiments with animation techniques from the 1920s to the 1940s. It notes that Disney borrowed a stop motion camera from his boss in the early 1920s to create hand-drawn animated films called "Laugh-O-Grams". In 1928, Disney experimented with synchronizing audio with film animation. From 1929 to 1939, more than 75 "Silly Symphonies" were created to further explore advances in sound, color, and animation. The Walt Disney Studios was also the first to experiment with Technicolor, in 1932, for the animated short "Flowers and Trees".
Social Aspects of Emotions in Twitter Conversations (Alice Oh)
The document describes research into analyzing sentiments and emotions in Twitter conversations using topic modeling and sentiment analysis techniques. The researchers define primary and secondary emotions and discover topics in Twitter data that represent different sentiments and emotions. Patterns of sentiment and emotion transitions are analyzed to understand how emotions are communicated and influenced between conversation partners.
Intelligence is defined as the ability to think rationally, act purposefully, and effectively deal with the environment. There are different types and theories of intelligence. Intelligence tests aim to measure intelligence through individual or group tests that assess verbal, non-verbal, or performance abilities. Famous intelligence tests include the Stanford-Binet, Wechsler scales, and Raven's matrices. The Wechsler scales separately measure verbal and performance IQ through subtests, and the Stanford-Binet was influential in establishing the intelligence quotient score.
Utility and neuroscience: a mechanistic approach of decision-making and ratio... (Benoit Hardy-Vallée, Ph.D.)
This document discusses neuroeconomics, which is the study of the neural mechanisms of decision-making and their economic significance. It provides several definitions of neuroeconomics from the literature. The key methods of neuroeconomics include developing behavioral tests of decision tasks, comparing theory/data, and using various neural studies like imaging to understand the biological mechanisms underlying decisions. Some examples discussed are studies looking at neural responses related to pricing, risk/ambiguity, ultimatum games, and trust games. The document argues that mechanistic models of decision-making that identify specific causal entities and their interactions have advantages over other types of models in providing explanations and predictions that can be integrated with other domains. However, it notes that inferring preferences from
The document discusses designing user experiences and outlines 7 principles for creating optimal experiences:
1. Create a great first impression with attractive design.
2. Provide attentive service that anticipates user needs.
3. Allow for personalization and customization.
4. Pay attention to details.
5. Provide feedback to prevent frustration and manage expectations.
6. Make the experience fun through things like points and leaderboards.
7. Craft an environment like Starbucks or Virgin that enhances the overall experience.
This document describes the development of an interactive table called Fable designed for children with various abilities, including disabilities. It discusses the design process, prototype features, and future possibilities for Fable. The table allows up to four children, including those who are blind, deaf, use a wheelchair, or have cerebral palsy, to play collaborative games together. Features include accessible buttons and games to help teach lessons and foster learning, interaction, and fun. The creator hopes Fable can one day be scaled up or down in size and include more players to provide an inclusive space for all children to learn and play together.
This document discusses the importance and benefits of prototyping in user experience design. It outlines that prototypes are used to validate concepts, try out ideas at low risk, identify issues before implementation, sell visions to stakeholders, and bring teams together around a common design. Different prototyping techniques are appropriate for different purposes, from low-fidelity paper prototypes for early exploration to interactive prototypes for evaluating flows and transitions. Good prototypes put readers in the user's perspective, have an appropriate level of investment, can evolve over time, communicate the right level of detail, and are accessible to different teams. Prototyping is presented as a core part of an iterative design process.
This document discusses different types of prototypes and their uses. It begins by defining prototypes as ways to identify problems, try out ideas, identify issues, and bring teams together. It then describes different types of prototypes from static to interactive, and their appropriate uses. Key advantages discussed include validating concepts, exploring options quickly, and assessing application flow before production. The document emphasizes that good prototypes put the user first, have appropriate investment, communicate the right level of detail, and are changeable, accessible and help align teams. Overall it promotes prototyping as an important part of the design process.
1. The document discusses how the complexity of information design on webpages can affect cognition, emotion, and usability.
2. Eye tracking research shows that pages with medium complexity lead to more efficient processing, as seen in longer fixation durations and shorter saccade amplitudes.
3. Medium complexity pages are perceived as less effortful to process and lead to more positive evaluations, likely due to the more fluent processing they enable compared to lower or higher complexity pages.
Cognitive intersections: Meeting Narrative, Semiotics, and Neuroscience in Vi... (Cody Mejeur)
Presentation for International Narrative 2016 conference on proposed work bringing together narrative, semiotics, and cognitive neuroscience to study game narrative.
This document discusses the importance and benefits of prototyping in user experience design. It outlines that prototypes are used to validate concepts, try out ideas at low risk, identify issues before implementation, sell visions to stakeholders, and bring teams together around a common design. Different prototyping techniques are appropriate for different purposes, from low-fidelity paper prototypes for early exploration to interactive prototypes for evaluating flows and transitions. Good prototypes put readers in the user's perspective, have an appropriate level of investment, can evolve over time, communicate the right level of detail, and are accessible to different teams. Prototyping is presented as a core part of an iterative design process.
This document discusses different types of prototypes and their uses. It begins by defining prototypes as ways to identify problems, try out ideas, identify issues, and bring teams together. It then describes different types of prototypes from static to interactive, and their appropriate uses. Key advantages discussed include validating concepts, exploring options quickly, and assessing application flow before production. The document emphasizes that good prototypes put the user first, have appropriate investment, communicate the right level of detail, and are changeable, accessible and help align teams. Overall it promotes prototyping as an important part of the design process.
1. The document discusses how the complexity of information design on webpages can affect cognition, emotion, and usability.
2. Eye tracking research shows that pages with medium complexity lead to more efficient processing, as seen in longer fixation durations and shorter saccade amplitudes.
3. Medium complexity pages are perceived as less effortful to process and lead to more positive evaluations, likely due to the more fluent processing they enable compared to lower or higher complexity pages.
Cognitive intersections: Meeting Narrative, Semiotics, and Neuroscience in Vi...Cody Mejeur
Presentation for International Narrative 2016 conference on proposed work bringing together narrative, semiotics, and cognitive neuroscience to study game narrative.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
2. Best practices: Daniel Moody, The "Physics" of Notations: Towards a Scientific Basis for Constructing Visual Notations in Software Engineering, IEEE Transactions on Software Engineering, Vol. 35, No. 5, November-December 2009.
From its references [1]-[151]:
[1] Alexander, C.W., Notes on the Synthesis of Form. 1970, Boston, US: Harvard University Press.
[2] Avison, D.E. and Fitzgerald, G., Information Systems Development: Methodologies, Techniques and Tools (3rd edition). 2003, Oxford, United Kingdom: Blackwell Scientific.
...
[150] Zhang, J., The Nature of External Representations in Problem Solving. Cognitive Science, 1997. 21(2): p. 179-217.
[151] Zhang, J. and Norman, D.A., Representations in Distributed Cognitive Tasks. Cognitive Science, 1994.
3. Worst practices: Steven Kelly, Risto Pohjonen, Worst Practices for Domain-Specific Modeling, IEEE Software, vol. 26, no. 4, pp. 22-29, July/Aug. 2009. Free from: www.metacase.com/stevek.html
What doesn't work, drawn from:
• 76 DSM cases
• 15 years
• 4 continents
• several tools
• 100 DSL creators
• 3–300 modelers
4. Tool: hammer, nails
Worst practice (concept source): the tool's technical limitations dictate the language (14% of cases)
15. Brain Power: Does UML pop? (Moody SLE08)
[Table: UML relationship notations, distinguished only by line style (solid or dashed), end shape (diamond, open arrow, closed arrow, cross, circle, semicircle, none) and end fill (black or white). Rows: Aggregation, Association (navigable), Association (non-navigable), Association class, Composition, Constraint, Dependency, Generalisation, Generalisation set, Interface (provided), Interface (required), N-ary association, Note reference, Package, Package merge, Package import (public), Package import (private), Realization, Substitution, Usage.]
18. Domain users care deeply about notation!
Notation is the "UI" for the language.
19. Brain Power: Form ≥ Content (Moody TSE09)
"research in diagrammatic reasoning shows that the form of representations has an equal, if not greater, influence on cognitive effectiveness as their content [68, 122, 151]."
20. Brain Power: Concrete syntax ≥ Abstract syntax (Moody TSE09)
"apparently minor changes in visual appearance can have dramatic impacts on understanding and problem solving performance [19, 68, 103, 122]... especially by novices [53, 56, 57, 79, 93, 106, 107]."
42. Graphical vs. Textual?
C.A.R. Hoare, Hints on Programming Language Design:
- transparency of meaning
- independence of parts
- recursive application
- narrow interfaces
- manifestness of structure
- locality and scope
- procedures and parameters
50. Medical Mixing Machine
Requirement: "take from the second cup 5 units with filter A and put 2 units to cup 6 and 3 units to cup 7 and then clean the needle"
Code:
01 move(-3); filt(1); suck(5);
02 move(4); filt(0); blow(2);
03 move(1); blow(3);
04 move(-3); suck(30);
05 move(1); blow(30);
51. Version 0: Unitype Modelling Language
Straight mapping of text DSL to graphical
move(-3);
filt(1);
suck(5);
move(4);
filt(0);
blow(2);
move(1);
blow(3);
move(-3);
suck(30);
move(1);
blow(30);
52. Worst practices (concept source): too generic / too specific
• Too few / too generic concepts: 21%
• Too many / too specific concepts: 8%
• Language for just 1 model: 7%
68. Model Integration
Best language integration = no integration
69. Model Integration: Decomposition (Moody TSE09)
• Follow the logical structure
• Break into sub-models
• Max 20 elements each
• Split 7±2 ways per level
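The decomposition guidelines above can be sketched as a tiny checker. This is a hypothetical helper, not part of any cited tool; only the thresholds (20 elements per sub-model, a 7±2-way split per level) come from the slide.

```python
# Hypothetical checker for the decomposition guidelines:
# at most 20 elements per sub-model, split 7±2 ways per level.

def check_submodel(name, n_elements, n_children):
    """Return a warning per guideline the sub-model breaks."""
    warnings = []
    if n_elements > 20:
        warnings.append(f"{name}: {n_elements} elements, max 20 recommended")
    if n_children and not 5 <= n_children <= 9:
        warnings.append(f"{name}: splits {n_children} ways, aim for 7±2")
    return warnings

# A sub-model within both limits produces no warnings:
print(check_submodel("Billing", n_elements=12, n_children=6))   # []
# One that is too large and splits too many ways produces two:
print(check_submodel("Core", n_elements=35, n_children=14))
```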
70. Model Integration: Summary Model (Moody TSE09)
• Top-level overview
• Shows all sub-models
• Shows sub-model links
71. Model Integration: Side-by-side view (Moody TSE09)
• 2+ models on screen
• Reduces memory load
• User chooses
• User positions
72. Model Integration: Cross-model links (Moody TSE09)
• Show referred objects
• Real object or pointer
• Use sparingly: coupling
73. Version 2: Logical grouping
• Collect logical groups of code into visual chunks
  – Cf. commented code regions, GOTO
Chunk 1: move(-3); filt(1); suck(5);
Chunk 2: move(4); filt(0); blow(2);
Chunk 3: move(1); blow(3);
Chunk 4: move(-3); suck(30); move(1); blow(30);
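Written out in ordinary code, the grouping amounts to breaking the flat Version 0 command stream into its logical chunks. A sketch; the comment on each chunk is inferred from the requirement on the Medical Mixing Machine slide.

```python
# The flat Version 0 command stream, broken into the logical chunks the
# braces on the slide mark out (chunk comments inferred from the requirement).
chunks = [
    ["move(-3);", "filt(1);", "suck(5);"],                # take 5 units with filter A
    ["move(4);", "filt(0);", "blow(2);"],                 # put 2 units to cup 6
    ["move(1);", "blow(3);"],                             # put 3 units to cup 7
    ["move(-3);", "suck(30);", "move(1);", "blow(30);"],  # clean the needle
]

# Flattening the chunks reproduces the original 12-command sequence:
flat = [cmd for chunk in chunks for cmd in chunk]
print(len(flat))  # 12
```

The grouping adds no semantics, which is exactly the slide's point: it is a visual aid, like commented regions in code.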
74. Version 3: Support model reuse
• Cf. GOSUB, functions
[Diagram: decomposition into sub-model X]
75.
76. Integration Paradigm 1: String matching in files (Model Integration)
• Strings are 1-dimensional character arrays
• Look for the same sequence: "E", "m", "p", etc.
  – Or a UUID, a unique identifier in XML
• Inefficient, hard to see, fragile
  – but familiar!
[Character-array view: "class Employee ... class Manager extends Employee ... Developer extends Employee"]
77. Integration Paradigm 2: Direct reference in repository (Model Integration)
• Works like objects in memory
• Efficient: direct pointer
• Visible: see referrers
• Robust: change once
  – But less familiar!
[Diagram: Manager and Developer referring directly to Employee]
78. String matching vs. direct reference (Model Integration)
String matching: scan text for "class Employee ... class Manager extends Employee ... Developer extends Employee"
Direct reference: an Employee object pointed to directly by Manager and Developer
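The contrast between the two paradigms can be sketched in code. The class and file names below are made up for illustration, not taken from any real modeling tool.

```python
# Paradigm 1: string matching. A reference is "whatever text matches the
# name": inefficient to resolve, invisible, and broken by any rename.
files = {
    "Employee.src":  "class Employee { ... }",
    "Manager.src":   "class Manager extends Employee { ... }",
    "Developer.src": "class Developer extends Employee { ... }",
}

def referrers(name):
    """Scan every file for the character sequence 'extends <name>'."""
    return sorted(f for f, text in files.items() if f"extends {name}" in text)

print(referrers("Employee"))    # ['Developer.src', 'Manager.src']

# Paradigm 2: direct reference. A model element holds a pointer to the
# real object, so a rename happens in exactly one place.
class Element:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

employee  = Element("Employee")
manager   = Element("Manager",   parent=employee)
developer = Element("Developer", parent=employee)

employee.name = "StaffMember"   # change once...
print(manager.parent.name)      # ...and every referrer sees it
```

Note that after the rename, the string-matching lookup would silently find nothing for the new name: the fragility the slide warns about.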
79. Integration Paradigms: Tool support for direct reference (Model Integration)
• Concrete syntax: view
• UI: edit
• Cross-model references: link
• Disk representation: load
[Table: view / edit / link / load support in Xtext, EMF/GMF, DSL Tools, MPS, MetaEdit+]
80. Integration Paradigms: Summary (Model Integration)
• We need both!
  – But tools often only offer strings
• Use direct references whenever possible
  – Make the most important references visible
• Use string matching if you need indirection
  – Deliberately break into exchangeable modules
83. Viewpoints: Single language, multiple notations (example)
• Mobile apps
• UI display
• Control flow
84. Viewpoints: Single language, different notation
• Metamodeler offers choice of concrete syntax
• No extra work for modeler
85. Viewpoints: Single language, different tool behaviour
• View & edit only what is relevant / allowed
• Generally one user's UI is a subset of the other's
  – No extra work for modeler
86. Viewpoints: Single language, different notation types
• Tool supports multiple editors on the same underlying model
• No extra work for metamodeler (with good tools)
• Modeler adds layout
87. Worst practice (in use): ignoring the real-life process of using the language (42% of cases)
89. Building together: modeling != coding
Same old problems but new material; old solutions don't apply:
... new processes
... new tools
90. Building together: modeling != coding
Diff + merge: text easy, graphs hard
Multi-user editing: text hard, graphs easy
See "Mature Model Management", #cg2011
91.
92. Notation literature
• Blackwell, A., Metaphor in Diagrams. Ph.D. Thesis, University of Cambridge, September 1998. www.cl.cam.ac.uk/~afb21/
• Hoare, C.A.R., Hints on Programming Language Design. Stanford AI Lab, Memo AIM-224. http://www.cs.berkeley.edu/~necula/cs263/handouts/hoarehints.pdf
• Kelly, S. and Tolvanen, J.-P., Domain-Specific Modeling. http://dsmbook.com
• Miller, G.A., The Magical Number Seven, Plus or Minus Two. Psychological Review, 63, 81-97, 1956. psychclassics.yorku.ca/Miller/
• Moody, D. and van Hillegersberg, J., Evaluating the Visual Syntax of UML. In D. Gašević, R. Lämmel, and E. Van Wyk (Eds.): SLE 2008, LNCS 5452, pp. 16-34, Springer-Verlag Berlin Heidelberg 2009. http://books.google.fi/books?id=mFy3MXJKLBgC&pg=PA16
• Moody, D., The "Physics" of Notations: Towards a Scientific Basis for Constructing Visual Notations in Software Engineering. IEEE Transactions on Software Engineering, Vol. 35, No. 5, November-December 2009. http://www.ajilon.com.au/en-AU/news/Documents/News_PDFs/100528_Dr_Daniel_Moody_Software_Engineering_Keynote.pdf
93.
94. Version 4: Higher-level domain concepts
• Make reusable chunks into types
  – Give types properties to parameterize reuse
• From: the command-level model
• To: Take, Put, Clean
95. Version 4: Requirements, Model, Code
Requirements: "take from the second cup 5 units with filter A", "put 2 units to cup 6", "put 3 units to cup 7", "then clean the needle"
[Model diagram: Take, Put, Put, Clean]
Code:
01 move(-3); filt(1); suck(5);
02 move(4); filt(0); blow(2);
03 move(1); blow(3);
04 move(-3); suck(30);
05 move(1); blow(30);
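A sketch of how the three domain concepts could compile down to the command lines shown. The cup positions, start position, and filter codes below are assumptions reverse-engineered from this one example (needle starting at slot 5, wash station at 4, drain at 5, filter A = 1), not the real machine's specification.

```python
# Hypothetical compiler from the domain concepts Take, Put and Clean down
# to the machine commands. Positions and filter codes are assumptions
# reverse-engineered from the slide's example.

class Mixer:
    WASH, DRAIN, START = 4, 5, 5     # assumed slot positions

    def __init__(self):
        self.pos = self.START
        self.filt = None             # current filter, unknown at start
        self.lines = []              # generated command lines

    def _move(self, target):
        if target == self.pos:
            return []
        delta, self.pos = target - self.pos, target
        return [f"move({delta});"]

    def _filter(self, f):
        if f == self.filt:
            return []                # only emit filt() on a change
        self.filt = f
        return [f"filt({f});"]

    def take(self, cup, units, filter_code):
        self.lines.append(" ".join(
            self._move(cup) + self._filter(filter_code) + [f"suck({units});"]))

    def put(self, cup, units):
        self.lines.append(" ".join(
            self._move(cup) + self._filter(0) + [f"blow({units});"]))

    def clean(self):
        self.lines.append(" ".join(self._move(self.WASH) + ["suck(30);"]))
        self.lines.append(" ".join(self._move(self.DRAIN) + ["blow(30);"]))

m = Mixer()
m.take(cup=2, units=5, filter_code=1)   # take 5 units with filter A from cup 2
m.put(cup=6, units=2)                   # put 2 units to cup 6
m.put(cup=7, units=3)                   # put 3 units to cup 7
m.clean()                               # clean the needle
print("\n".join(m.lines))
```

Under these assumptions the four high-level operations regenerate exactly the five command lines above, which is the slide's point: the model moves to the domain level while the generator keeps the bookkeeping (relative moves, filter changes).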
96. Modeling effort?
17 objects → 5 objects
12 relationships → 4 relationships
17 properties → 7 properties
46 elements in total → 16 elements in total
97. Modeling effort?
17 objects → 2 objects
12 relationships → 2 relationships
17 properties → 3 properties
46 elements in total → 7 elements in total