Miniature electronics and global supply chains have us on the cusp of a new era of human experience. Early forms of wearable computing focused on augmenting the human ability to compute freely. As wearable computing pioneer Steve Mann and calm technology pioneer Mark Weiser put it, the goal was “to free the human to not act as a machine”. What does this mean for us as designers and developers, and how can we build interfaces for the next generation of devices?
Who was here before us, and how can we best learn from them? These are the machines that will be part of our lives only a few years from now, and the best way to learn about the future is to dig into the past. This talk will focus on trends in wearable computing and VR as they developed from the 1960s to now, and then into the future.
We'll learn about Ivan Sutherland, human augmentation, infrastructure, machine vision, processing, distributed computing and wireless data transfer, a church dedicated to VR, computer backpacks, heads up displays, reality editing, job simulators and unexplored realms of experience that haven't yet come to life. We'll also learn about the road from virtual reality to augmented reality and what we need to build to get there. This talk is for anyone interested in how we can add a new layer of interactivity to our world and how we can take the next steps to get there.
Speech given at AR in Action 2017 at MIT Media Lab on 17 Jan 2017.
Designing Calm Technology: Design for the Next Generation of Devices, by Amber Case
Our world is made of information that competes for our attention. What is needed? What is not? We cannot interact with our everyday life in the same way we interact with a desktop computer. The terms calm computing and calm technology were coined in 1995 by PARC Researchers Mark Weiser and John Seely Brown in reaction to the increasing complexities that information technologies were creating. Calm technology describes a state of technological maturity where a user’s primary task is not computing, but being human. The idea behind Calm Technology is to have smarter people, not things. Technology shouldn’t require all of our attention, just some of it, and only when necessary.
How can our devices take advantage of location, proximity and haptics to improve our lives instead of getting in the way? How can designers make apps “ambient” while respecting privacy and security? This talk will cover how to use principles of Calm Technology to design the next generation of connected devices. We’ll look at notification styles, compressing information into other senses, and designing for the least amount of cognitive overhead.
Workshop on Designing Calm Technology at UX London, by Amber Case
The difference between an annoying technology and one that is helpful is how it engages our attention. Calm Technology is a framework for designing ubiquitous devices that engage our attention in an appropriate manner. The aim of Calm Technology is to provide principles that keep the human lifestyle and environment in mind, allowing technology to amplify humanness instead of taking it away.
This workshop will cover how to use principles of Calm Technology to design the next generation of connected devices. We’ll look at notification styles, compressing information into other senses, and designing for the least amount of cognitive overhead.
--Intended Audience--
This workshop is for anyone who actively builds or makes decisions about technology, especially user experience designers, product designers, managers, creative directors and developers. Attendees are encouraged to have some background in user experience design and to look at http://calmtech.com/ or the book Designing Calm Technology before the workshop.
--Structure and Activities--
Students will work in groups to solve a series of design challenges, including designing new products, ‘calming down’ complex ones, communicating the principles of Calm Technology across an organization and team, and entering a product successfully into the marketplace.
--You’ll learn how to--
- Use principles of Calm Technology to design the next generation of connected devices.
- Design appropriate notification systems into both physical and software products
- Communicate the principles of Calm Technology across your organization and team
- Use methods of Calm Technology to design technology for generations, not seasons.
- Enter your product successfully into the marketplace.
Talk originally given at NEXT2018 in Hamburg, Germany.
Calm Technology | Inbound 2015 Bold Talk, by Amber Case
Our world is made of information that competes for our attention. What is needed? What is not? We cannot interact with our everyday life in the same way we interact with a desktop computer. Technology shouldn’t require all of our attention, just some of it, and only when necessary.
The terms calm computing and calm technology were coined in 1995 by PARC Researchers Mark Weiser and John Seely Brown in reaction to the increasing complexities that information technologies were creating. Calm technology describes a state of technological maturity where a user’s primary task is not computing, but being human.
The idea behind Calm Technology is to have smarter people, not things. How can our devices take advantage of location, proximity and haptics to improve our lives instead of getting in the way? How can designers make apps “ambient” while respecting privacy and security? This talk will cover how to use principles of Calm Technology to design the next generation of connected devices. We’ll look at notification styles, compressing information into other senses, and designing for the least amount of cognitive overhead.
---
These are slides from the INBOUND conference, Sept 9, 2015, in Boston, MA.
Google Glass and the Future of Wearable Computing, by Amber Case
Google will release a wearable heads up display this fall, and it may help to usher in a new era of augmented reality and wearable computing. What does this mean for us as designers and developers? How do we build for the next generation of computers? Who was here before us, and how can we learn from them?
From its birthplace at MIT and PARC, the field of wearable computing has focused on augmenting the human ability to compute freely. As wearable computing pioneer Steve Mann and calm technology pioneer Mark Weiser put it, the goal was “to free the human to not act as a machine”. Mann didn’t like the idea of crouching over a desktop computer. He instead felt that the computer should conform to the human naturally, so he began his own wearable computing mission.
This talk will focus on trends in wearable computing from the 1970s to the 2010s. I’ll cover various HUDs (heads-up displays), new tech from Motorola and Google, various invasive and non-invasive tech, and how mobile interfaces should take advantage of location, proximity and haptics to improve our lives instead of getting in the way. These are the machines that will be part of our lives only a few years from now, and the best way to learn about the future is to dig into the past.
Speech given at OSBridge 2012 by Amber Case: http://opensourcebridge.org/sessions/857
Trendcasting for 2018: What Will the Future of Tech Hold? by Brian Pichman
Join Brian Pichman of the Evolve Project as he highlights this year's biggest technology trends and what they mean for 2018. What changes are on the horizon? Which technologies should we hold out for? From drones to virtual/augmented reality, to making, to innovation: find out what is on the cusp and what will be the biggest trends of 2018.
This presentation was given by Amber Case of Healthways at Delight 2015 on Oct. 5, 2015.
Technology shouldn't require all of our attention, just some of it, and only when necessary. Calm technology describes a state of technological maturity where a user's primary task is not computing, but being human. The idea is to have smarter people, not things. Amber will cover how to use principles of Calm Technology in product design and how we must manage the next generation of connected devices in our human landscape.
http://delight.us/conference
At any given moment it is easy to look back and see how technology has changed over time. It is much harder to see what transformations are taking place in the current moment, and harder still to see where things are going.
We will explore what technology is. For us it may be the latest gadget we see, something new. But what about the everyday objects we take for granted? Are those not technologies also?
How does technology evolve? We look at some ideas on the evolution of technology and how it resembles biology in some ways. We will also look at the origin of the word technology. Finally we will define the terms we will use in the course: technology, product performance, and innovation, to name a few.
At any given time, with all the knowledge we have, new knowledge can emerge. We call this the adjacent possible. It explains why new inventions appear when they do, and why they were not possible before. The adjacent possible is a very useful concept for understanding the progress of technology. Technology evolves by building on prevailing technologies, so it is combinatorial and built in layers; with each layer, new ideas can be built on the previous ones. Gall's Law says that any complex system that works is built of simpler systems that work.
We will look at the adjacent possible and some ideas that came when all the enabling technologies are available. We also look at an idea that was not possible to build at the time, Charles Babbage engines.
Did you know that the term "computer" once referred to a profession? What did these human computers actually do? They computed mathematical problems. Some problems were tedious and error-prone, so it is not surprising that people started to develop machines to aid in the effort. The first mechanical computers were actually created to eliminate errors in human computation. Then came tabulating machines and cash registers. It was not until telephone companies were well established that computing machines became practical.
The first computers were huge mainframes, but soon minicomputers like DEC’s PDP series started to appear. The transistor was introduced in 1947, but its usefulness was not fully realized until 1958, when the integrated circuit was invented. This led to the invention of the microprocessor. In 1971, Intel marketed the 4004, and the personal computer revolution started. One of the first personal computers was MITS’s Altair. It was a simple device, and soon others saw the opportunities.
In this lecture we start our coverage of computing and look at some of the early machines and the impact they had.
Slides for the September 26th Internet of Things webinar I ran for RS to kick off their new Internet of Things Design Centre, which we contributed content to. bit.ly/IOT-Webinar
Mind the Gap: Designing the Space Between Devices, by Josh Clark
There's untapped magic in the gaps between gadgets. Multi-screen design is a preoccupying problem as we try to fit our content onto many different screens. But as devices multiply, the new opportunity is less about designing individual screens and more about designing interactions BETWEEN them, often without using a screen at all. Learn to create web and app experiences that share control among multiple devices, designing not only for screens but for sensors. The technology is already here in our pockets, handbags, and living rooms. Learn how to use it right now.
MCE^3 - Jonathan Flint - What I Cannot Create, I Do Not Understand, by PROIDEA
My talk will cover our process through two different case studies that touch on speculative design, strategy and making, through a series of built and deployed products. The first case study will look at the Drone Aviary project, which explores the social, political and cultural potential of drone technology as it enters civil space. It will also examine how we have been communicating the project to different audiences in the form of inventive publications and workshops. The second case study will look at BuggyAir (part of our bigger Internet of Things platform called IOTA), an accurate mobile sensing kit that helps parents understand their children's exposure to air pollution. The title of the talk is from a quote by Richard Feynman. It resonates with me because I am making and designing artefacts in the studio that have to be intuitive enough for any audience to understand. We also find ourselves in the studio trying to work things out, from deciding what flight system to use for a quadcopter to fabrication techniques. To do this we often ask ourselves: do we really understand each step of the process?
2022 Calm Technology | Designing Human Out.pptx, by Amber Case
Our world is made of information that competes for our attention.
What is necessary? What is not?
When we design products, we aim to choose the best position for user interface components, placing the most important ones in the most accessible places on the screen.
Equally important is the design of communication. How many notifications are necessary? How and when should they be displayed? To answer this, we can take inspiration from the principles of calm technology.
Principles of Calm Technology

1. Technology should require the smallest possible amount of attention.
- Technology can communicate, but doesn’t need to speak.
- Create ambient awareness through different senses.
- Communicate information without taking the user out of their environment or task.

2. Technology should inform and create calm.
- A person's primary task should not be computing, but being human.
- Give people what they need to solve their problem, and nothing more.

3. Technology should make use of the periphery.
- A calm technology will move easily from the periphery of our attention to the center, and back.
- The periphery informs without overburdening.

4. Technology should amplify the best of technology and the best of humanity.
- Design for people first.
- Machines shouldn't act like humans.
- Humans shouldn't act like machines.
- Amplify the best part of each.

5. Technology can communicate, but doesn’t need to speak.
- Does your product need to rely on voice, or can it use a different communication method?
- Consider how your technology communicates status.

6. Technology should work even when it fails.
- Think about what happens if your technology fails. Does it default to a usable state, or does it break down completely?

7. The right amount of technology is the minimum needed to solve the problem.
- Slim the feature set down so that the product does what it needs to do and no more.

8. Technology should respect social norms.
- Technology takes time to introduce to humanity. What social norms exist that your technology might violate or cause stress on?
- Slowly introduce features so that people have time to get accustomed to the product.
Webvisions NY 2012 - The Future is Now: Ambient Location and the Future of th..., by Amber Case
Wouldn't it be nice if your colleague's phone could SMS its location to you? If you know position and velocity, you know when they'll arrive. The result: the interface disappears. No redundant actions or queries. The same software could turn your lights on as you approach the house. Or automatically "check in" to certain locations for you. Or leave a note for yourself the next time you're at the store.
In the presentation, Geoloqi founder Amber Case will highlight why developers of apps should look at what users want to do now, as well as what users want to do in the future, why social apps should try to mirror real-world relationships, why sharing should be about who you share with as well as how long you're sharing, and why developers should think about how to make apps "ambient" and require less user interaction.
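The "position plus velocity tells you when they'll arrive" idea above can be sketched in a few lines. This is a minimal illustration only, not Geoloqi's actual implementation: the haversine helper, the constant-speed assumption, and the coordinates are all mine.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometres.
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def eta_minutes(lat, lon, dest_lat, dest_lon, speed_kmh):
    # Naive ETA: remaining straight-line distance over current speed.
    if speed_kmh <= 0:
        return None  # stationary or invalid speed: no meaningful estimate
    return 60.0 * haversine_km(lat, lon, dest_lat, dest_lon) / speed_kmh

# A colleague ~5 km away travelling at 30 km/h arrives in about 10 minutes.
print(round(eta_minutes(45.0, -122.0, 45.0449, -122.0, 30.0), 1))  # → 10.0
```

A real ambient-location app would refine this with road routing and smoothed speed estimates, but the core trick is exactly this: two sensor readings replace a "where are you?" query.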
Miniature electronics and global supply chains have us on the cusp of a new era of human experience. Early forms of wearable computing focused on augmenting the human ability to compute freely. As pioneer Steve Mann and calm technology pioneer Mark Weiser wanted, “to free the human to not act as a machine”. What does this mean for us as designers and developers, and how can we build interfaces for the next generation of devices?
Who was here before us, and how can we best learn from them? These are the machines that will be a part of our lives in only a few years from now, and the best way to learn about the future is to dig into the past. This talk will focus on trends in wearable computing and VR as it developed from the 1960s to now, and then into the future. This talk will cover various topics on the history and future of wearables. We’ll learn about Ivan Sutherland, human augmentation, infrastructure, machine vision, processing, distributed computing and wireless data transfer, a church dedicated to VR, computer backpacks, heads up displays, reality editing, job simulators and unexplored realms of experience that haven’t yet come to life. We’ll also learn about the road from virtual reality to augmented reality and what we need to build to get there. This talk is for anyone interested in how we can add a new layer of interactivity to our world and how we can take the next steps to get there.
Google Glass and the Future of Wearable ComputingAmber Case
Google will release a wearable heads up display this fall, and it may help to usher in a new era of augmented reality and wearable computing. What does this mean for us as designers and developers? How do we build for the next generation of computers? Who was here before us, and how can we learn from them?
From it’s birthplace at MIT and PARC research, the field of wearable computing has focused on augmenting the human ability to compute freely. As pioneer Steve Mann and calm technology pioneer Mark Weiser wanted, “to free the human to not act as a machine”. Mann didn’t like the idea of crouching over a desktop computer. He instead felt that the computer should contort to the human naturally, so he began his own wearable computing mission.
This talk will focus on trends in wearable computing starting from the 1970’s-2010’s. I’ll cover various HUDs (heads up displays), new tech from Motorola, Google, various invasive and non-invasive tech and how mobile interfaces should take advantage of location, proximity and haptics to help improve our lives instead of get in the way. These are the machines that will be a part of our lives in only a few years from now, and the best way to learn about the future is to dig into the past.
Speech given at OSBridge 2012 by Amber Case: http://opensourcebridge.org/sessions/857
Trendcasting for 2018: What Will the Future of Tech Hold? | Brian Pichman
Join Brian Pichman of the Evolve Project as he highlights this year's biggest technology trends and what they mean for 2018. What changes are on the horizon? Which technologies should we hold out for? From drones to virtual and augmented reality to making and innovation, find out what is on the cusp and what will be the biggest trends of 2018.
This presentation was given by Amber Case of Healthways at Delight 2015 on Oct. 5, 2015.
Technology shouldn't require all of our attention, just some of it, and only when necessary. Calm technology describes a state of technological maturity where a user's primary task is not computing, but being human. The idea is to have smarter people, not things. Amber will cover how to use principles of Calm Technology in product design and how we must manage the next generation of connected devices in our human landscape.
http://delight.us/conference
At any given moment it is easy to look back and see how technology has changed over time. It is far more difficult to see what transformations are taking place in the current moment, and more difficult still to see where things are going.
We will explore what technology is. For us it may be the latest tech we see, something new. But what about the everyday objects we take for granted? Are those not technologies also?
How does technology evolve? We look at some ideas on the evolution of technology and how it resembles biological evolution in some ways. We will also look at the origin of the word technology. Finally we will define the terms we will use in the course, including technology, product performance, and innovation, to name a few.
At any given time, with all the knowledge we have, new knowledge can emerge. We call this the adjacent possible. It explains why new inventions appear when they do, and why they were not possible before. The adjacent possible is a very useful term for understanding the progress of technology. Technology evolves by improving upon prevailing technologies; it is combinatorial and built in layers, and with each layer new ideas can be built on the previous ones. Hence Gall's Law: any complex system that works is built from simpler systems that work.
We will look at the adjacent possible and some ideas that emerged once all their enabling technologies were available. We will also look at an idea that was not possible to build in its time: Charles Babbage's engines.
Did you know that the term "computer" once referred to a profession? And what did these human computers actually do? They computed mathematical problems. Some problems were tedious and error-prone, so it is not surprising that people started to develop machines to aid the effort. The first mechanical computers were actually created to eliminate errors in human computation. Then came tabulating machines and cash registers. It was not until telephone companies were well established that computing machines became practical.
The first computers were huge mainframes, but soon minicomputers like DEC’s PDP series started to appear. The transistor was introduced in 1947, but its usefulness was not fully realized until 1958, when the integrated circuit was invented. This led to the invention of the microprocessor: Intel marketed the 4004 in 1971, and the personal computer revolution started. One of the first personal computers was MITS’ Altair. It was a simple device, and soon others saw the opportunity.
In this lecture we start our coverage of computing and look at some of the early machines and the impact they had.
Slides from the September 26th Internet of Things webinar I ran for RS to kick off their new Internet of Things Design Centre, which we contributed content to. bit.ly/IOT-Webinar
Mind the Gap: Designing the Space Between Devices | Josh Clark
There's untapped magic in the gaps between gadgets. Multi-screen design is a preoccupying problem as we try to fit our content into many different screens. But as devices multiply, the new opportunity is less about designing individual screens but designing interactions BETWEEN them—often without using a screen at all. Learn to create web and app experiences that share control among multiple devices, designing not only for screens but for sensors. The technology is already here in our pockets, handbags, and living rooms. Learn how to use it right now.
MCE^3 - Jonathan Flint - What I Cannot Create, I Do Not Understand | PROIDEA
My talk will cover our process through two different case studies which touch on speculative design, strategy and making, through a series of built and deployed products. The first case study will look at the Drone Aviary project exploring the social, political and cultural potential of drone technology as it enters civil space. It will also examine how we have been communicating the project to different audiences in the form of inventive publications and workshops. And the next case study will look at BuggyAir (part of our bigger Internet of things platform called IOTA) an accurate mobile sensing kit that helps parents understand their children's exposure to air pollution. The title of the talk is from a quote by Richard Feynman it resonates with me because I am making and designing artefacts in the studio that have to be intuitive enough for any audience to understand. But also we find ourselves in the studio trying to work something out; from deciding what flight system to use for a quadcopter, to fabrication techniques. To do this we often ask ourselves, do we really have a real understanding of each step of the process?
2022 Calm Technology: Designing Human Out | Amber Case
Our world is made of information that competes for our attention.
What is necessary? What is not?
When we design products, we aim to choose the best position for user interface components, placing the most important ones in the most accessible places on the screen.
Equally important is the design of communication. How many notifications are necessary? How and when should they be displayed? To answer this, we can be inspired by the principles of calm technology.
Principles of Calm Technology
Technology should require the smallest possible amount of attention
Technology can communicate, but doesn’t need to speak.
Create ambient awareness through different senses.
Communicate information without taking the user out of their environment or task.
Technology should inform and create calm
A person's primary task should not be computing, but being human.
Give people what they need to solve their problem, and nothing more.
Technology should make use of the periphery
A calm technology will move easily from the periphery of our attention, to the center, and back.
The periphery is informing without overburdening.
Technology should amplify the best of technology and the best of humanity
Design for people first.
Machines shouldn't act like humans.
Humans shouldn't act like machines.
Amplify the best part of each.
Technology can communicate, but doesn’t need to speak
Does your product need to rely on voice, or can it use a different communication method?
Consider how your technology communicates status.
Technology should work even when it fails
Think about what happens if your technology fails.
Does it default to a usable state or does it break down completely?
The right amount of technology is the minimum needed to solve the problem
What is the minimum amount of technology needed to solve the problem?
Slim the feature set down so that the product does what it needs to do and no more.
Technology should respect social norms
Technology takes time to introduce to humanity.
What social norms exist that your technology might violate or cause stress on?
Slowly introduce features so that people have time to get accustomed to the product.
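The principles above lend themselves to a simple sketch: route low-priority information to peripheral channels and reserve interruptions for what genuinely needs attention. Here is a minimal illustration in Python; the channel names and the priority scale are assumptions for illustration, not part of any calm-technology framework.

```python
# Hypothetical sketch of a "calm" notification policy: most information
# goes to the periphery (an ambient glow, a status line), and only
# urgent items are allowed to interrupt the user.

from dataclasses import dataclass


@dataclass
class Notification:
    message: str
    priority: int  # 0 = ambient ... 3 = urgent (assumed scale)


def route(notification: Notification) -> str:
    """Pick the least attention-demanding channel that still informs."""
    if notification.priority >= 3:
        return "interrupt"      # full-screen alert: rare, reserved
    if notification.priority == 2:
        return "haptic"         # a buzz: noticed, but not spoken
    if notification.priority == 1:
        return "ambient-light"  # periphery: informs without overburdening
    return "log-only"           # no attention required at all


assert route(Notification("Kettle done", 1)) == "ambient-light"
assert route(Notification("Smoke detected", 3)) == "interrupt"
```

The design choice mirrors the periphery principle: information moves toward the center of attention only when its priority demands it, and falls back to the edge otherwise.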
Webvisions NY 2012 - The Future is Now: Ambient Location and the Future of th... | Amber Case
Wouldn't it be nice if your colleague's phone could SMS its location to you? If you know position and velocity, you know when they'll arrive. The result: the interface disappears. No redundant actions or queries. The same software could turn your lights on as you approach the house. Or automatically "check in" to certain locations for you. Or leave a note for yourself the next time you're at the store.
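The arithmetic behind "if you know position and velocity, you know when they'll arrive" is just distance over speed. A small sketch; the function name and units are mine, not from the talk:

```python
def eta_seconds(distance_m: float, speed_m_s: float) -> float:
    """If you know how far away someone is and how fast they are moving
    toward you, the arrival estimate is simply distance over speed."""
    if speed_m_s <= 0:
        raise ValueError("need a positive speed to estimate arrival")
    return distance_m / speed_m_s


# A colleague 1.2 km away, walking at 1.5 m/s, arrives in about 13 minutes.
minutes = eta_seconds(1200, 1.5) / 60
assert round(minutes) == 13
```

With an estimate like this in hand, the app can schedule the light switch or the "check in" without the user touching a screen, which is exactly how the interface disappears.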
In the presentation, Geoloqi founder Amber Case will highlight why developers of apps should look at what users want to do now, as well as what users want to do in the future, why social apps should try to mirror real-world relationships, why sharing should be about who you share with as well as how long you're sharing, and why developers should think about how to make apps "ambient" and require less user interaction.
Miniature electronics and global supply chains have us on the cusp of a new era of human experience. Early forms of wearable computing focused on augmenting the human ability to compute freely. As pioneer Steve Mann and calm technology pioneer Mark Weiser wanted, “to free the human to not act as a machine”. What does this mean for us as designers and developers, and how can we build interfaces for the next generation of devices?
Given at MCEConference | Warsaw, Poland
Our world is made of information that competes for our attention. What is needed? What is not? We cannot interact with our everyday life in the same way we interact with a desktop computer. The terms calm computing and calm technology were coined in 1995 by PARC Researchers Mark Weiser and John Seely Brown in reaction to the increasing complexities that information technologies were creating.
Calm technology describes a state of technological maturity where a user's primary task is not computing, but being human. The idea behind Calm Technology is to have smarter people, not things. Technology shouldn't require all of our attention, just some of it, and only when necessary.
How can our devices take advantage of location, proximity and haptics to improve our lives instead of getting in the way? How can designers make apps “ambient” while respecting privacy and security?
This talk will cover how to use principles of Calm Technology to design the next generation of connected devices. We'll look at notification styles, compressing information into other senses, and designing for the least amount of cognitive overhead.
Presentation for #TFT12: Location and the Future of the Interface
In this presentation, Geoloqi founder Amber Case will highlight why developers of apps should look at what users want to do now, as well as what users want to do in the future, why social apps should try to mirror real-world relationships, why sharing should be about who you share with as well as how long you're sharing, and why developers should think about how to make apps "ambient" and require less user interaction.
See Amber's TFT speaker Pinterest board: http://pinterest.com/servicedesk/amber-case/
Cyborg Camp YVR 2013: Amber Case: “From Solid to Liquid to Air: Cyborg Anthro... | theholongroup
“From Solid to Liquid to Air: Cyborg Anthropology and the Future of the Interface”
We are now entering into an era of liquid interfaces, where buttons can be downloaded at will, and software flies through the air. Phones have been untethered from their cords and are free to colonize our pockets. They cry, and we must pick them up. They get hungry, and we must plug them in. We increasingly live on interfaces, and it is their quality and design which increases our happiness and our frustration.
We are tool using creatures. Prosthetics touch almost every part of our lives. Until recently, humans have used their hands and bodies to interface with objects. Early interfaces were solid and tactile. Now, the interface can be anywhere. The best interfaces compress the time and space it takes to absorb relevant information, and the worst cause us car accidents, lost revenue, and communication failures.
This speech will discuss how the field of anthropology can be applied to interface design, and how future interfaces, such as the ones employed by augmented reality, will change the way we act, feel and communicate with one another.
Amber Case is a cyborg anthropologist, examining the way humans and technology interact and evolve together. Like all anthropologists, Case watches people, but her fieldwork involves observing how they participate in digital networks, analyzing the various ways we project our personalities, communicate, work, play, share ideas and even form values. Case founded Geoloqi.com, a private location-sharing application, out of a frustration with existing social protocols around text messaging and wayfinding.
“She’s a digital native. She’s from the future. She’s come back to help us figure out how to think.” – Kris Krug, in Fast Company
Amber Case (Berkman Klein Center for Internet & Society) - Designing with sound | Startupfest
Sound is one of the most commonly overlooked components in product design, even though it's often the first way people interact with many products. When designers don’t pay enough attention to sound elements, customers are frequently left with annoying and interruptive results. This talk will cover several methods that product designers and managers can use to improve everyday interactions through an understanding and application of sound design.
Phantom Inventory is a complex problem facing the CPG industry. However, now with POS data and store level reports, reps can identify these issues and quickly address them the same day as the alert is sent.
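As a hedged sketch of how such an alert might work: one common phantom-inventory signal is a SKU the system reports as in stock that has not scanned at the register for days. The field names and threshold below are assumptions, not an actual report schema.

```python
def phantom_candidates(items, zero_sale_days=7):
    """Flag SKUs that the system says are in stock but that have not
    scanned at the register for `zero_sale_days` or more -- a classic
    phantom-inventory signal (the shelf may actually be empty)."""
    flagged = []
    for item in items:
        if item["on_hand"] > 0 and item["days_since_last_sale"] >= zero_sale_days:
            flagged.append(item["sku"])
    return flagged


# Illustrative store-level report rows (invented data).
store_report = [
    {"sku": "A1", "on_hand": 12, "days_since_last_sale": 10},
    {"sku": "B2", "on_hand": 0,  "days_since_last_sale": 30},
    {"sku": "C3", "on_hand": 5,  "days_since_last_sale": 1},
]
assert phantom_candidates(store_report) == ["A1"]
```

A rep receiving an alert for "A1" could check the shelf the same day, which is the quick-turnaround loop the entry describes.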
Manlike machines have fascinated humans since ancient times. Modern robots started to take shape with the industrial revolution. In the 20th century, robots were mostly industrial machines you would see in factories, such as car plants.
Today, robots can have sensors and vision; they can hear and understand, and they can connect to the cloud for more information. However, we are still in the early stages of robotics, and robots have a long way to go to become useful as ubiquitous general-purpose devices.
Eric Mattison, Senior Analyst at Vertex Pharmaceuticals and former ABCD W3 co-chair, will explain how the Internet of Things (IoT) is being used to streamline scientific processes, shortening the time-to-market for life-saving drugs. The talk will include:
- What is IoT? Just another buzzword to get budget allocation from C-level executives, or an actual game-changer?
- How we got here: the technologies and economics that make IoT possible
- Implementations, large and small (the small ones are the most interesting)
Bio
Before selling out to almighty Mammon, Eric Mattison was an impoverished journeyman web serf here at Harvard, extolling the virtues of Python, Django and web APIs. Now a Senior Analyst at Vertex Pharmaceuticals, he works to streamline internal business processes using Python, Django and web APIs.
(This presentation occurred on October 11th, 2017)
"Today's Tech versus Brave Machines' Tomorrow!"
What kind of machine philosophy do we need to build a safe tomorrow? This talk will discuss solutions in which computer vision will be the keystone. By examining today's techniques, their pain points and their limitations, we will try to derive the technological boundaries of tomorrow.
Getting things done is different at scale. After Case's company Geoloqi joined Esri in 2012, she grew her division from 6-20 people, and successfully launched two major products in the course of a year. She also managed the transition of the company to Github from Enterprise and spearheaded an effort for more open source projects. This speech will cover what Case learned from managing a team of 6 to managing a team of 20 in an international company of 3,000. It will detail hiring, morale, culture, and translating what you need to do into a language the larger team can understand, and what changes from 2 people to 6, to 20 and more.
Designing for Privacy in Mobile and Web Apps - Interaction '14, Amsterdam | Amber Case
Practice privacy by design, not privacy by disaster!
See the talk here: http://caseorganic.com/articles/2014/02/12/1/designing-for-privacy-in-mobile-and-web-apps-at-interaction-14-in-amsterdam
Almost every application requires some gathering of personal data today. Where that data is stored, who has access to it, and what is done with that data later on is becoming increasingly important as more and more of our data lives online today. Privacy disasters are costly and can be devastating to a company. UX designers and developers need to have a framework for protecting user data, communicating it to users, and making sure that the entire process is smoothly handled.
This talk covers best practices for designing web and mobile apps with the privacy of individual users in mind. Privacy is an even bigger issue with location-based apps, and we ran into it head-first when we began work on Geoloqi (now part of Esri). Our goal was to design an interface that made one's personal data feel empowering instead of creepy. Stories from the design decisions behind our application are also included in this talk.
Brand Engagement and the Future of the Interface | Amber Case
This was an in-depth talk on the future of technology, brand engagement. It focused on the next generation of the interface – discussing calm technology, mobile and sensor technology (location, triggers, buttons) and the future of sharing.
The talk was given at SAY:CREATE 2012 in Carmel, California on Tuesday, Sept 11, 2012.
Meditation and the Modern Cyborg - BGeeks Conference Keynote, Boulder, Colorado | Amber Case
Amber Case had trouble sleeping as far back as she can remember. When she was 4, she decided to do something about it. It involved thinking of her brain as a computer and manually shutting it down.
This talk covers various aspects of what it is like to be a connected human, the effect of connectivity on the brain and the need for digital downtime as well as the history and future of our increasing relationship with technology.
Future of Location - Street Fight Summit 2012 | Amber Case
Amber Case is the founder of Geoloqi, Inc., a company bringing the future of location to the world. She’s spoken at TED and around the world, and has been featured in Forbes, Fast Company, WIRED and more.
http://stories.dlvr.it/story/98564-streetfight
Frontiers of Interaction '11 Speech. Florence, Italy | Amber Case
We are now entering into an era of liquid interfaces, where buttons can be downloaded at will, and software flies through the air. Phones have been untethered from their cords and are free to colonize our pockets. They cry, and we must pick them up. They get hungry, and we must plug them in. We increasingly live on interfaces, and it is their quality and design which increases our happiness and our frustration.
We are tool using creatures. Prosthetics touch almost every part of our lives. Until recently, humans have used their hands and bodies to interface with objects. Early interfaces were solid and tactile. Now, the interface can be anywhere. The best interfaces compress the time and space it takes to absorb relevant information, and the worst cause us car accidents, lost revenue, and communication failures.
This speech will discuss how the field of anthropology can be applied to interface design, and how future interfaces, such as the ones employed by augmented reality, will change the way we act, feel and communicate with one another. Topics will include non-places, time and space compression, privacy, user flow, supermodernity, wearable computing, work and play, gaming, history and prosthetic culture.
Location as Invisible Interface - ARE2011 Presentation | Amber Case
The best interfaces are invisible. They should get out of the way and help you live your life.
This presentation discusses ambient applications, multiple sensory inputs and a history of heavy-weight contextual reality applications. It starts with Steve Mann, who believed that computers should be wearable, and who was obsessed with the idea of creating a custom reality based on his personal preferences.
The second part of this presentation talk about how we're building subscription-based reality and contextual notification systems on top of Geoloqi, how non-visual augmented reality is replacing interactions with the phone with interactions with the world, and real-time location-based gaming.
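The contextual notifications described here rest on a simple primitive: a geofence test against the user's current position. A sketch of that primitive follows; this is illustrative, not Geoloqi's actual API, and the coordinates are arbitrary.

```python
import math


def inside_geofence(lat, lon, center_lat, center_lon, radius_m):
    """Return True when (lat, lon) falls within radius_m meters of the
    geofence center, using the haversine great-circle distance."""
    r = 6371000.0  # mean Earth radius in meters
    dp = math.radians(center_lat - lat)
    dl = math.radians(center_lon - lon)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat)) * math.cos(math.radians(center_lat))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a)) <= radius_m

# Fire a location-triggered note only when the user is inside the region.
if inside_geofence(45.5231, -122.6765, 45.5230, -122.6764, 100):
    print("Reminder: you're near the store")
```

Non-visual augmented reality of the kind described above is essentially this check, run continuously, with the notification channel replacing the screen.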
The Future is Now - PopTech Marketing Event March 8th | Amber Case
Today we’re all carrying around not phones in our pockets, but sensors. These sensors are capable of processing information and taking pictures, as well as knowing where we are and how fast we’re moving. These sensors used to cost thousands of dollars and weigh tens of pounds; now they’re available to everyone.
This presentation will cover a history of augmented reality and mobile connectivity, as well as where the market is today and how it can be leveraged to deliver groundbreaking interactive campaigns and engaging media. We'll dive into some of the augmented reality campaigns, pros and cons of AR and QR codes, and a series of platforms on which you can make your own location based augmented reality applications. Also discussed is http://geoloqi.com, a service and platform for building location-aware applications.
Remember the Milk: Location-based Apps and the Marketplace | Amber Case
Slides from a speech to the Software Association of Oregon on November 10, 2010 at the Multnomah Athletic Club.
---
There’s a message from your future and it’s telling you to remember to pick up milk.
What will you learn:
1. Why developers of apps should look at what users want to do now, as well as what users want to do in their future.
2. Why social apps should try to mirror real–world relationships
3. Why sharing should be about who you share with as well as how long you want the information to be available.
4. Why developers should think about making apps “ambient” and requiring less user interaction
Amber Case and her partner Aaron Parecki are the founders of GeoLoqi. GeoLoqi is a private, real-time mobile and web platform for secure location data, with features such as Geonotes, proximal notification, and sharing real-time GPS maps with friends. Geoloqi has been covered in the Willamette Week and Oregon Business. It has been presented at eComm, Open Source Bridge, Show and Tell PDX and Research Club under the alias Non-Visual Augmented Reality with SMS and GPS.
Plastic Time and the Future of the Interface | Amber Case
This was a speech that @caseorganic and @aaronpk gave at the Intel campus to the Interaction and Experience group on Monday, Sept 20th, 2010.
This speech covers elements of home automation, GPS, SMS, location sharing, geotriggers, Geonotes and other mashups that can be done using IRC as a control hub.
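The IRC-as-control-hub pattern mentioned here reduces to parsing chat lines into commands and dispatching them to handlers (lights, geonotes, and so on). A hypothetical sketch; the command names and handlers are invented for illustration, not taken from the talk:

```python
# Hypothetical dispatcher for an IRC control hub: chat lines like
# "!lights on" or "!geonote store: buy milk" are parsed into commands
# and routed to handler functions.

def handle_lights(args):
    return f"lights turned {args}"


def handle_geonote(args):
    place, _, note = args.partition(":")
    return f"geonote at {place.strip()}: {note.strip()}"


HANDLERS = {"lights": handle_lights, "geonote": handle_geonote}


def dispatch(irc_line: str):
    """Route a '!command args' chat line to its handler, if any."""
    if not irc_line.startswith("!"):
        return None  # ordinary chat, ignore
    command, _, args = irc_line[1:].partition(" ")
    handler = HANDLERS.get(command)
    return handler(args) if handler else None


assert dispatch("!lights on") == "lights turned on"
assert dispatch("hello everyone") is None
```

The appeal of the design is that any client that can send a chat message — a phone, an SMS gateway, a geotrigger — can drive the same hub.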
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... | James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) involves many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed of release to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
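At its simplest, a deployment bill of materials is a structured record of what was deployed, where, and when, with a verifiable digest per artifact. A minimal sketch of capturing one follows; the field names are assumptions for illustration, not the DBOM format the speakers use.

```python
import hashlib
import json
from datetime import datetime, timezone


def capture_dbom(environment, artifacts):
    """Record what was deployed, where, and when -- with a content
    digest per artifact so the record can be verified later."""
    entries = []
    for name, content in artifacts.items():
        entries.append({
            "artifact": name,
            "sha256": hashlib.sha256(content).hexdigest(),
        })
    return {
        "environment": environment,
        "deployed_at": datetime.now(timezone.utc).isoformat(),
        "artifacts": entries,
    }


dbom = capture_dbom("production", {"app.jar": b"\x00\x01"})
print(json.dumps(dbom, indent=2))
```

Because the digests are computed from artifact content, a later audit can re-hash what is actually running and compare it against the captured record.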
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... | Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
Elevating Tactical DDD Patterns Through Object Calisthenics | Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality | Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview | Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Accelerate your Kubernetes clusters with Varnish Caching | Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
The History and Future of VR and AR
1. liveworx.com # LIVEWORX
THE HISTORY AND FUTURE OF VR AND AR
Amber Case | @caseorganic
MIT Media Lab and Harvard Berkman Klein Center
case@caseorganic.com
49. caseorganic.com
1. Machines shouldn't act like humans
2. Humans shouldn't act like machines
3. Technology should amplify the best of technology and the best of humanity.
66. # LIVEWORX
WE WANT YOUR FEEDBACK!
Please remember to complete your evaluation by selecting the session in your mobile app.
Survey
Editor's Notes
More about Steve Mann
But not the cyborgs you think.
Our first tools were extensions of the physical self
We’ve been cyborgs from the first tools
But – they’ve extended physical selves – not the mental selves.
Flickr: cybertoad. But really, we've always been cyborgs, from the first tools. (Image license: Attribution-NonCommercial-NoDerivs 2.0 Generic.)
And technology extends the mental self.
But these new tools bring with them very curious things.
They cry, and we have to pick them up.
We have to replace them.
There are a number of issues to address with wearables:
Look and Feel
The look and feel of a device is extremely important: poorly designed yet workable HUDs will decrease a user's social status, thus preventing wide adoption.
Transparency and Redundancy
Steve Mann's successful HUD was a transparent display with laser input into one eye. Current wearables often obscure the display from both eyes. Not only is this dangerous, in that the user no longer has a back-up real-world sensor available at all times (the user's own calibrated eye), but it also increases the chances of nausea, and the entire contraption suffers from lag if the graphics are not rendered in real time or if there is a network error.
Overdesign
Almost all AR is designed to "pop" or impress. Most of it is a one-trick pony that unnecessarily overstimulates the user's brain. The example I always give is the early web and the giant rush of companies and startups to make an index or navigable way to "surf" the web. Many tried visual views of the different "sections" of the web, and some even tried to render a 3D view that users could explore. However, users didn't want to "explore", especially over a 14.4K connection on a 233 MHz machine. E-mail was sufficient for receiving hyperlinks to interesting things on the web. What people needed was an architecture optimized for speed. Google's no-frills, speedy interface provided that solution.
AR currently suffers from a bout of coolness and has not yet reached the trough of disillusionment. It is my hope that the future of AR will see the design of minimalistic interfaces that actually solve real-world problems. There is a long way to go to clear away the junk that has piled up around the industry. Perhaps when the field matures it will no longer be called AR.
How can this be helped?
User Research
I then pointed out that there was probably a very simple way to know which entry-level, real-life situations would be helped by HUDs. Watch people using their phones, and pay close attention to the situations in which they are completely stuck in the phone, or can't do a task because they are trying to look at the phone and the real world at the same time while moving. Those instances are what needs to appear on the display: they present problems that can be solved by entering those use cases into the heads-up display.
Input methods other than gestures are not usually discussed.
Knowing what the user needs
If we know exactly what is important to the user, we will know which problems to solve. We will not be able to solve all of the problems, but if we solve just one or two, that is enough.
Minimum Viable Product Features
The first iPhone was very simple. While it didn't have GPS or 3G, it made it easy to do some things better than previous devices did. It was an incremental progression over previous methods of interacting with data.
A Gradual Experience
Every user needs an experience that grows over time. They can't just start out with all of the complexity that a system provides.
A user is very trainable: over time they reach the point where, when they pick up a device or put it down, they know exactly what they are going to do with it. When you watch someone with a smartphone, they have an idea of what they want to do with it the moment they touch it.
If we aim low, we may have more chances of success. At the very least we should design a HUD that doesn't cause nausea or deliver too much information. Knowing what to focus on, and surfacing only that, may be the key.
What is the one thing you would want in glasses that would motivate using them instead of a mobile phone?
If my car breaks down, is it possible to become my own mechanic? That would be disruptive, taking mechanics out of the loop. Order the parts you need from Amazon directly from the device. Expert systems overlaid on the eyes could highlight the areas of work on the vehicle and teach the user how to fix minor problems.
Concept Models
The best way to get a product point across is a design model that someone has really put some thought into. For some odd reason, designers don't usually have to be able to build or wire up objects, although the best of them can. MIT's Media Lab teaches both design and development, inseparable from each other.
And if one cannot 3D animate, carving an object or building it from paper and Photoshopping it can get the point across too. As long as the essence of the idea is communicated visually, what it takes to get it there doesn't matter one bit.
Fashionable, Feasible Prosthetics and Social Status
For wide adoption, wearables need to increase one's social viability rather than detract from it: not interfering with social norms, not detracting from one's sociability.
A Mercedes-Benz or a BMW adds to your social status. An old Geo Metro may detract from it, although it is a far more robust, affordable, gas-efficient and maneuverable vehicle.
To get to these hyperlinked memories, we must become increasingly skilled virtual paleontologists. The e-mail inbox is the best example of this. Every day our memories and data are covered by a new layer of dust, spam, and items to be responded to. If we need something from our past, we must dig through the newly accumulated items to get it. But instead of using a hammer and chisel, brush and field notebook, we use keywords and search results, tags and categories.
Simultaneous time also causes social punctuation, as technosocial connectivity seeps into every part of social relations.
The future is already here; it's just unevenly distributed. So I try to look at the past when I want to see the future. And one of these pieces of the future is this….
Steve Mann.
Experimental set-up to induce the 'body swap illusion'.
"The body-swap illusions worked well even though the mannequin or the other person looked different from the participant. In the first experiment there was no significant difference in rating scores between male and female subjects in the synchronous illusion condition, despite the fact that we only used a male mannequin (N = 32, p = .613, F(1,223)=.257, ANOVA). Similarly, in the second experiment, male and female subjects alike were able to accept the arm of the female experimenter as their own. Further, we compared the threat-evoked skin conductance responses between males and females after threatening the new artificial body. To obtain sufficient numbers of males and females to enable a statistical comparison of the SCR, we pooled the data from the synchronous and asynchronous conditions where the stimulation was applied on the abdomen in experiments three and four. We found no significant difference in the illusion related SCR between males and females (p = .952, F = .004, Two Way Repeated Measures ANOVA). These observations suggest that gender identity, and differences in the precise shape of the bodies, are not important factors for perceiving a body as one’s own."
www.ts-si.org/neuroscience/3636-identity-a-the-illusion-o...
Original from journal.pone.0003832.pdf (page 4 of 9)
Collaborative Reality
According to Steve Mann: A shared reality or collaborative mediated reality is "a negotiation between two parties allowing one to temporarily access the viewpoint of another".
In this instance, this guy is walking around in the store trying to purchase milk. His wife at home can see what he's seeing and can help him choose the right product.
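As a rough sketch of Mann's negotiation idea, the access-control core can be modeled in a few lines. The class and method names below are hypothetical, purely for illustration; they are not from any real mediated-reality system:

```python
# Minimal sketch of "collaborative mediated reality" as a negotiation:
# a viewer must request, and be granted, temporary access to another
# person's camera viewpoint. All names here are hypothetical.

class ViewpointShare:
    def __init__(self, owner):
        self.owner = owner
        self.granted_to = set()

    def request_access(self, viewer, approve):
        """The owner approves or denies each request; access is never implicit."""
        if approve:
            self.granted_to.add(viewer)
        return approve

    def revoke(self, viewer):
        """Access is temporary: the owner can revoke it at any time."""
        self.granted_to.discard(viewer)

    def frame_for(self, viewer, frame):
        """Only approved viewers receive the owner's camera frames."""
        return frame if viewer in self.granted_to else None

# The shopping example from the notes: the person at home sees what the
# shopper sees in the store, but only while access has been granted.
share = ViewpointShare(owner="shopper")
share.request_access("wife", approve=True)
print(share.frame_for("wife", "milk aisle"))  # -> milk aisle
share.revoke("wife")
print(share.frame_for("wife", "milk aisle"))  # -> None
```

The point of the sketch is the negotiation itself: frames flow only while the owner's explicit, revocable grant is in place.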
Microvision: Wearable Displays Gallery
www.microvision.com/wearable_displays/wearable_applicatio...
Applications of a device which places data over your eyes in seemingly innocuous glasses.
We’re all growing up connected.
Getting used to your second self.
Testing Doug Engelbart's Cyborg Glove
With a student of Donna Haraway, testing Valerie Landau and Doug Engelbart's Cyborg Glove
HandyKey - Twiddler2 - one handed chording USB keyboard
Wearables: "Twiddler was one of the first components I bought when designing my wearable computer. After six years of everyday use, I wouldn't think of using a wearable without one. The convenience and ergonomic benefits become apparent with long-term use. In fact, for the last two years, the Twiddler and my wearable computer have replaced my desktop (e.g. my PhD thesis was written with the Twiddler).
When starting the MIT Wearable Computing Project, I issued every member a Twiddler as their primary text input device. Upon starting another group at Georgia Tech focused on wearable computing, I've just placed an order for 10 more Twiddler 1's. We've seen typing speeds of 60 words per minute, and an undergraduate has reported speeds up to 30 words a minute with only a weekend of practice. More generally, new users can learn the alphabet in 5 minutes and can be touch typing in an hour. Though it takes time for the fingers to 'loosen up' to accommodate the new motion (much like learning to play an instrument or learning how to type on a desktop), many new users are up to 10 words a minute with a weekend's worth of practice, and current non-touch typists remark that it is easier than learning the desktop QWERTY keyboard." Thad Starner, Professor at Georgia Tech and formerly of the MIT Media Lab.
"I'd like to say that I have been very happy with the Twiddler. I've been tinkering with wearable computers for some 15 years now, and never come across a better input device. I've designed and built a number of input devices from microswitches and the like, before the Twiddler was being manufactured, but I really do like the Twiddler, despite its 1 or 2 shortcomings. It gives me the same sense of tactile feedback that I get from a high quality microswitch, enabling me to control various kinds of apparatus without my needing to pay full attention to the screen... If you need any 'testimonials' from an experienced tinkerer, designer, builder, and user of wearable computing, I'd be happy to recommend Twiddler to wearable computer users, over and above voice (or certainly at least in addition to), eye movement trackers, and all of the other ways of controlling computers or external devices." Steve Mann, Professor, University of Toronto, Electrical Engineering Dept.
www.handykey.com/
Keymap for Chording on the Twiddler
The keymap for chording on the Twiddler. On the right, each grid of 3 × 4 rectangles represents the keypad from the user's perspective. The shaded rectangles are the buttons that need to be depressed to type the character printed below each keypad. Also displayed is a four-digit textual representation of the chord.
----
Source: Experimental Evaluations of the Twiddler One-Handed Chording Mobile Keyboard. Lyons, Kent; Starner, Thad; Gane, Brian. Human-Computer Interaction, Dec 2006, Vol. 21, Issue 4, p343-392. DOI: 10.1207/s15327051hci2104_1
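The four-digit chord representation described above can be modeled as a simple lookup table. The chord codes and letters below are illustrative placeholders, not the real Twiddler keymap:

```python
# Sketch of a chording-keyboard keymap, assuming a Twiddler-style encoding
# where a chord is written as four digits, one per finger row: 0 = no button,
# 1/2/3 = which column in that row is pressed. The specific codes and letters
# here are made up for illustration; they are NOT the actual Twiddler layout.

EXAMPLE_KEYMAP = {
    "1000": "a",  # top row, left column only
    "2000": "e",
    "3000": "i",
    "0100": "n",
    "0010": "t",
    "1100": "s",  # two rows pressed at once still form a single chord
}

def decode_chords(chords):
    """Translate a sequence of chord codes into text, skipping unknown chords."""
    return "".join(EXAMPLE_KEYMAP.get(c, "") for c in chords)

print(decode_chords(["1000", "0100", "0010"]))  # -> "ant"
```

Chording means several buttons pressed together produce one character, which is what lets a single hand cover a full alphabet on only a dozen keys.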
VirtuSphere Virtual-Reality Simulator for Mil/LE Tactical Training
A company called VirtuSphere, Inc. (Sammamish, WA) has a product called, appropriately enough, VirtuSphere, which can apparently provide a rather unique Mil/LE tactical training and simulation experience. Due to its design, the VirtuSphere provides "infinite space" and claims to also provide "the most immersive [virtual reality a.k.a. "VR"] experience for simulated training, exercise and gaming." The VirtuSphere platform consists of a large hollow sphere that can rotate 360 degrees as the user walks, runs, somersaults, etc. inside it while wearing a wireless, head-mounted VR (virtual reality) display a.k.a. wireless VR headset. Co-invented by Nurakhmed “Ray” Latypov and Nurulla Latypov (both corporate officers at VirtuSphere, Inc.), the VirtuSphere has been developed with…
the assistance of a team of research scientists and developers at the HIT Lab (Human Interface Technology Lab) at the University of Washington, including Dr. Suzanne Weghorst, a senior research scientist and assistant director of research at the UW HIT Lab. The joint project between VirtuSphere and the HIT Lab was reportedly made possible through a Research and Technology Development (RTD) grant from the Washington Technology Center (WTC).
VirtuSphere was selected by the Office of Naval Research (ONR) for their Virtual Technologies and Environments (VIRTE) program (Phone: 703-696-0360, Email: 342_VR@onr.navy.mil) in October, 2005. Training & Simulation Journal (TSJ) reported on that event when it happened.
www.defensereview.com/virtusphere-virtual-reality-simulat...