My personal view of the top learning technologies for 2016. Taken from a range of academic and industry sources, with a point of view on how they can be used in Learning and Development.
iBeacon is the name for Apple's technology standard, which allows mobile apps (running on both iOS and Android devices) to listen for signals from beacons in the physical world and react accordingly. In essence, iBeacon technology allows mobile apps to understand their position on a micro-local scale and deliver hyper-contextual content to users based on location.
The underlying communication technology is Bluetooth Low Energy (BLE), a wireless personal area network technology used for transmitting data over short distances. As the name implies, it is designed for low energy consumption and cost, while maintaining a communication range similar to that of its predecessor, Classic Bluetooth. BLE is ideal for simple applications requiring small, periodic transfers of data; Classic Bluetooth is preferred for more complex applications requiring constant communication and higher data throughput.
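To make the "listening for signals" idea concrete, here is a minimal sketch of how an app might decode the payload an iBeacon broadcasts. The 25-byte layout (Apple company ID, beacon type, length, proximity UUID, major, minor, TX power) follows the published iBeacon advertisement format; the UUID and region numbers in the example are made up.

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse the 25-byte Apple manufacturer-specific payload of an
    iBeacon advertisement into (uuid, major, minor, tx_power)."""
    # Layout: company ID 0x004C (little-endian), beacon type 0x02,
    # data length 0x15, 16-byte proximity UUID, 2-byte major,
    # 2-byte minor (both big-endian), 1-byte signed TX power.
    if len(mfg_data) != 25 or mfg_data[0:4] != b"\x4c\x00\x02\x15":
        raise ValueError("not an iBeacon payload")
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor = struct.unpack(">HH", mfg_data[20:24])
    tx_power = struct.unpack("b", mfg_data[24:25])[0]
    return proximity_uuid, major, minor, tx_power

# Hypothetical beacon marking, say, "building 101, room 3".
payload = (b"\x4c\x00\x02\x15"
           + uuid.UUID("f7826da6-4fa2-4e98-8024-bc5b71e0893e").bytes
           + struct.pack(">HHb", 101, 3, -59))
print(parse_ibeacon(payload))
```

An app would match the UUID against its known beacons and use major/minor to decide which hyper-contextual content to show.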
As Google notes, beacons do not generally accept connections from other devices, meaning that the beacon itself cannot record which devices are in its vicinity.
The Watson Cognitive Tutor (WCT) is an interactive application for students that provides:
Content exploration (related concepts, reactive documents, multiple representations)
Question generation around pre-requisite concepts during remediation
Formative and summative assessments
Gamification: points, badges, lock/unlock tests
Simple conversational chat to help troubleshoot
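As an illustration of the gamification mechanic above (points, badges, lock/unlock tests), here is a minimal sketch in Python. The badge thresholds and the unlock cutoff are assumptions for demonstration, not IBM's actual implementation.

```python
# Illustrative sketch only -- not IBM's actual Watson Cognitive Tutor code.
# Models the points / badges / lock-unlock mechanic described above.
class LearnerProfile:
    BADGES = {50: "Bronze", 150: "Silver", 300: "Gold"}  # assumed thresholds
    UNLOCK_SUMMATIVE_AT = 100  # assumed points needed to unlock the summative test

    def __init__(self):
        self.points = 0
        self.badges = []

    def award(self, points: int):
        """Add points and grant any newly reached badges."""
        self.points += points
        for threshold, badge in sorted(self.BADGES.items()):
            if self.points >= threshold and badge not in self.badges:
                self.badges.append(badge)

    def summative_unlocked(self) -> bool:
        return self.points >= self.UNLOCK_SUMMATIVE_AT

learner = LearnerProfile()
learner.award(60)                    # e.g. a formative quiz passed
print(learner.badges)                # ['Bronze']
learner.award(60)
print(learner.summative_unlocked())  # True
```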
Learner Risk Stratification: We group learners into risk cohorts with respect to specific outcomes by generating insights from data-driven analytics of factors such as students' course performance, grade progression, attendance and engagement, socio-economic indicators, disciplinary actions, social/collaborative behavior, learning patterns and teacher effectiveness, to name a few.
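A minimal sketch of what threshold-based stratification over such factors might look like. The factor weights and cutoffs below are illustrative assumptions, not the actual data-driven model, which would be learned from outcomes rather than hand-set.

```python
# Hedged sketch: weighted-score risk stratification over three of the factor
# types named above. Weights and cutoffs are assumptions for demonstration.
def risk_cohort(performance: float, attendance: float, engagement: float) -> str:
    """All inputs normalized to [0, 1]; higher means better."""
    score = 0.5 * performance + 0.3 * attendance + 0.2 * engagement
    if score < 0.4:
        return "high-risk"
    if score < 0.7:
        return "medium-risk"
    return "low-risk"

# Hypothetical learners: (performance, attendance, engagement)
students = {"A": (0.9, 0.95, 0.8), "B": (0.5, 0.6, 0.4), "C": (0.2, 0.3, 0.1)}
cohorts = {name: risk_cohort(*factors) for name, factors in students.items()}
print(cohorts)
```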
Content Analytics: We apply deep analytic techniques on content and its usage to annotate content with rich meta-data related to learning concepts, content quality and effectiveness, interaction patterns etc. that help learners and teachers harness large content repositories effectively, and receive targeted recommendations.
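A toy sketch of annotating content with concept metadata: a hand-made lexicon is matched against the text and the hit counts become tags. Real content analytics would use much richer NLP; the concept lexicon here is purely an assumption for demonstration.

```python
import re
from collections import Counter

# Illustrative concept lexicon -- an assumption, not a real taxonomy.
CONCEPT_LEXICON = {
    "fractions": {"fraction", "numerator", "denominator"},
    "geometry": {"triangle", "angle", "polygon"},
}

def annotate(text: str) -> dict:
    """Tag a content item with learning concepts by lexicon match counts."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    tags = {concept: sum(words[w] for w in keywords)
            for concept, keywords in CONCEPT_LEXICON.items()}
    return {concept: n for concept, n in tags.items() if n > 0}

doc = "A fraction has a numerator above the line and a denominator below."
print(annotate(doc))  # {'fractions': 3}
```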
Personalized Pathways: We recommend and record learner-specific interventions, and continuously learn from past data and outcomes to design more effective pathways for individual learners. A rich, instrumented learning platform (that supports offline and tablet-based learning) provides personalized and adaptive learning support and integrates inside and outside classroom learning and feedback to provide a seamless blended learning experience.
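A minimal sketch of the pathway idea: remediate the weakest prerequisite until it clears a mastery threshold, then advance to the target topic. The topic names and the 0.6 cutoff are hypothetical; a real system would learn these pathways from past outcomes.

```python
# Illustrative prerequisite graph -- an assumption for demonstration.
PREREQS = {"algebra": ["arithmetic", "fractions"]}

def next_step(target: str, mastery: dict, threshold: float = 0.6) -> str:
    """Recommend the next topic: weakest unmastered prerequisite, else target."""
    weak = [t for t in PREREQS.get(target, []) if mastery.get(t, 0.0) < threshold]
    if weak:
        return min(weak, key=lambda t: mastery.get(t, 0.0))  # remediate weakest
    return target

print(next_step("algebra", {"arithmetic": 0.8, "fractions": 0.3}))  # fractions
print(next_step("algebra", {"arithmetic": 0.8, "fractions": 0.9}))  # algebra
```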
Bluetooth Smart technology is set to increase its range by up to four times and enjoy a 100% increase in speed, delivering faster data transfers.
The key concept behind Internet of Things (IoT) meshing is to enable connected things -- such as lights and thermostats that contain embedded sensor technologies -- to communicate without relying on PCs or dedicated hub services. This makes it much simpler to build a network of connected things and is, as a bonus, relatively inexpensive.
Emotion Recognition: these systems employ a suite of proprietary algorithms that analyse many facets of the user's speech, such as the GVC Emotion Recognition and GVC Voice Disorder Detection algorithms. Uses include health, marketing, safety, and workforce analysis. If businesses could sense emotion using technology at all times, they could capitalize on it to sell to the consumer at the opportune moment. Sound like 1984? The truth is that it's not that far from reality. Machine emotional intelligence is a burgeoning frontier that could have huge consequences not only in advertising, but in new startups, healthcare, wearables, education, and more.
There’s a lot of API-accessible software online that parallels the human ability to discern emotive gestures. These algorithm-driven APIs use facial detection and semantic analysis to interpret mood from photos, videos, text, and speech. Today we explore over 20 emotion recognition APIs and SDKs that can be used in projects to interpret a user’s mood.
Use Cases for Emotion Recognition
Smile — you’re being watched. The visual detection market is expanding tremendously. It was recently estimated that the global advanced facial recognition market will grow from $2.77 billion in 2015 to $6.19 billion in 2020. Emotion recognition takes mere facial detection/recognition a step further, and its use cases are nearly endless.
An obvious use case is within group testing. User response to video games, commercials, or products can all be tested at a larger scale, with large data accumulated automatically, and thus more efficiently. Bentley used facial expression recognition in a marketing campaign to suggest car model types based on emotive responses to certain stimuli. Technology that reveals your feelings has also been suggested to spot struggling students in a classroom environment, or help autistics better interact with others. Some use cases include:
Helping to better measure TV ratings
Adding another layer of security at malls, airports, sports arenas, and other public venues to detect malicious intent
Wearables that help autistic people discern emotion
Checkout counters and virtual shopping
Creating new virtual reality experiences
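As a toy illustration of the text channel these APIs analyse, here is a lexicon-based mood scorer. Commercial emotion recognition relies on trained models over faces, voice, and text; the emotion lexicon below is purely an illustrative assumption.

```python
# Toy emotion lexicon -- an assumption for demonstration, not a real model.
EMOTION_LEXICON = {
    "joy": {"happy", "great", "love", "delighted"},
    "anger": {"angry", "furious", "hate", "annoyed"},
    "sadness": {"sad", "unhappy", "miserable", "crying"},
}

def detect_emotion(text: str) -> str:
    """Score each emotion by lexicon hits; return the best, or 'neutral'."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    scores = {emotion: sum(t in words for t in tokens)
              for emotion, words in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(detect_emotion("I am so happy, I love this!"))  # joy
print(detect_emotion("The meeting starts at noon."))  # neutral
```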
A variety of other trends have led to an increased number of sensors embedded in many of the technologies and devices we use personally and professionally. These devices become smarter as they gather more data on our daily patterns. Gartner predicts that these sensors, which tend to work in silos today, will increasingly work in concert, leading to even greater insights about our daily patterns.
BYOD has now transformed into the “Device Mesh”.
The IoT uses the internet; the device mesh uses NFC, Bluetooth, Wi-Fi, etc.
More apps are being built to be plugged together, and the value of the combination is much greater than the sum of the parts. As Lyft has integrated with comparable services in other countries, it can serve its existing customers when they travel abroad (and vice versa), which has meant faster growth with minimal cost implications.
What is a mesh network? The answer varies a bit depending upon whom you ask, but the key is that mesh networks typically rely on wireless nodes rather than centralized access points to create a virtual wireless backbone. In other words, mesh networks wirelessly connect devices and computers directly without involving a phone company or ISP.
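The node-to-node relaying can be sketched as a breadth-first search over direct radio links: a message hops between neighbouring devices with no central access point involved. The four-device topology below is made up.

```python
from collections import deque

# Made-up mesh topology: which devices can hear each other directly.
LINKS = {
    "lamp": {"thermostat"},
    "thermostat": {"lamp", "doorbell"},
    "doorbell": {"thermostat", "camera"},
    "camera": {"doorbell"},
}

def relay_path(src: str, dst: str):
    """Breadth-first search for the hop sequence a message would take."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # dst unreachable in this mesh

print(relay_path("lamp", "camera"))
# ['lamp', 'thermostat', 'doorbell', 'camera']
```

The point of the sketch: the lamp and camera are out of each other's radio range, yet a message still gets through because intermediate nodes form the virtual backbone.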
Gartner refers to these devices' and sensors' ability to gather more contextual data, as described above, as the Ambient UX. The challenge will be in application design: anticipating this level of device synchronicity and collaboration. Gartner posits that the devices and sensors will become so smart that they will be able to organize our lives without our even noticing that they are doing so.
The device mesh creates the foundation for a new continuous and ambient user experience. Immersive environments delivering augmented and virtual reality hold significant potential but are only one aspect of the experience. The ambient user experience preserves continuity across boundaries of device mesh, time and space. The experience seamlessly flows across a shifting set of devices and interaction channels, blending physical, virtual and electronic environments as the user moves from one place to another.
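One way to picture that continuity: session state keyed by the user rather than by the device, so any device in the mesh can resume where the last one left off. The class, device names, and fields here are illustrative assumptions, not a real platform API.

```python
# Illustrative sketch of cross-device session continuity ("ambient UX").
class AmbientSession:
    def __init__(self):
        self.state = {}  # user -> latest interaction context

    def update(self, user: str, device: str, **context):
        """Record the latest context for a user, whatever device they used."""
        entry = self.state.setdefault(user, {})
        entry.update(context, last_device=device)

    def resume(self, user: str, device: str) -> dict:
        """A new device picks up the context the previous one saved."""
        return dict(self.state.get(user, {}), active_device=device)

session = AmbientSession()
# Hypothetical learner watches a lesson on her phone...
session.update("ana", "phone", video="lesson-4", position=312)
# ...then walks into the living room and the TV resumes at the same point.
handoff = session.resume("ana", "tv")
print(handoff["video"], handoff["position"])  # lesson-4 312
```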