IBM has updated its 'IBM Seer' mobile application to combine augmented reality with live location-based video streams
The capacitive touchscreen is really a microcosm of the convergence of digital and physical systems at the heart of the Outernet and IoT. It is a very dense grid of sensors that detect a finger’s touch, its direction and speed of motion, and the number of fingers touching (for zoom and shrink). But it also represents the very literal way that the interface between human beings and sensor-rich computers is becoming more intimate, more powerful and more embedded in our lives and societies. UCLA’s Deb Estrin notes: modern smartphones are rich portable computers with onboard sensors. Specifically, a smartphone is a location-aware (GPS), motion-aware (accelerometer), direction-aware (digital compass), visually aware (camera, used to scan QR codes or serve as visual input), sonically aware (microphone and speakers), always-connected (wireless or 3G) handheld computer. And what smartphones have today, billions of objects and devices will also have in a few years. Sensing and computing will become ubiquitous, pervasive, broadband and wireless.
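To make the "dense grid of sensors" idea concrete, here is a minimal sketch of how a touchscreen controller might turn raw grid readings into a finger position, a finger count, and a speed of motion. The grid values, the detection threshold, and the 60 Hz sampling rate are illustrative assumptions, not any vendor's actual firmware.

```python
def touch_points(grid, threshold=0.5):
    """Return (row, col) of every cell whose capacitance change exceeds the threshold."""
    return [(r, c)
            for r, row in enumerate(grid)
            for c, value in enumerate(row)
            if value > threshold]

def centroid(points):
    """Average position of the touched cells -- the reported finger location."""
    n = len(points)
    return (sum(r for r, _ in points) / n, sum(c for _, c in points) / n)

def speed(prev, curr, dt):
    """Distance (in grid cells) between two centroids divided by the sample interval."""
    return ((curr[0] - prev[0]) ** 2 + (curr[1] - prev[1]) ** 2) ** 0.5 / dt

# Two frames of a toy 4x4 sensor grid sampled 1/60 s apart: one finger sliding right.
frame1 = [[0, 0,   0,   0],
          [0, 0.9, 0.8, 0],
          [0, 0.8, 0.7, 0],
          [0, 0,   0,   0]]
frame2 = [[0, 0, 0,   0],
          [0, 0, 0.9, 0.8],
          [0, 0, 0.8, 0.7],
          [0, 0, 0,   0]]

c1 = centroid(touch_points(frame1))  # (1.5, 1.5)
c2 = centroid(touch_points(frame2))  # (1.5, 2.5): moved one column to the right
v = speed(c1, c2, 1 / 60)            # 60 cells per second
print(c1, c2, v)
```

A real controller does the same thing at far higher resolution, and tracking several centroids at once is what enables multi-finger gestures like pinch-to-zoom.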
*2 billion people on the Web by 2011, according to the Computer Industry Almanac. **A trillion connected objects, according to "From Autonomous to Cooperative," ERCIM Workshop on eMobility.
One of the most commonplace devices, the television, is already well on its way to becoming an internet-connected device that can not only stream content on demand from the Web via Ethernet or wireless, but will also begin to offer other kinds of services, such as Skype videoconferencing.
Cars are already packed with sensors that control braking, traction, rollover and stability, airbags and other vehicle systems. While the steel in a vehicle used to be its most valuable component, today the electronics are. New models include radar to support intelligent cruise control, and sensors to warn of lane drift or of vehicles in the driver's blind spot. Alan Taub, vice-president for R&D at General Motors, expects to see semi-autonomous vehicles on the highway by 2015. They will need a driver to handle busy city streets or negotiate complex junctions, but once on the highway they will be able to steer, accelerate and avoid collisions unaided. A few years on, he predicts, drivers will be able to take their hands off the wheel completely: “I see the potential for launching fully autonomous vehicles by 2020.”
The "Whuffie Meter" merges your physical presence with your online social identity. Socializing will take on completely new dimensions when people can see everything public about a person on semi-transparent infographic displays floating over their heads, right as they are talking with them. In the "Bodynet" concept scenario, future technologies will monitor the body's vital signs and compute the outcome of our actions on the fly. So this technology lets you enjoy that McDonald's meal even more, reassured by a floating data dashboard showing how it will shorten your estimated lifespan by several weeks.
Augmented Reality Has Arrived: What PR Firms Should Know
John C. Havens: SVP Social Media, Porter Novelli, and author of “Tactical Transparency”
Jack Mason: Global Business Services, Strategic Programs and Social Media, IBM
Mobile Devices, Sensors and the Deep Data they Generate are Turning the World Wide Web into the Web Wide World
There are now 5 billion mobile phones in the world, and 2 billion people will be on the Web by 2011 ... soon there will be a trillion connected objects – cars, appliances, cameras, roadways, pipelines – comprising the "Internet of Things."
Wal-Mart Acquires Vudu: Adds Its Clout to Movie Streaming (NY Times, Feb 2010)
An open-source platform for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication http://bit.ly/cxAPzM
Frog Design Study — Computing 2020 information aesthetics http://bit.ly/94dcdV
I’m at a conference and I don’t remember anyone’s name because I don’t have to. I’m wearing my AR glasses and can use facial recognition to pull up people’s names plus our most recent transcribed conversation (which had excellent SEO, as I’ve been using popular keywords when I talk), and I’m wearing a shirt with a QR code optimized for visual search. At the bar last night I saw a cute girl’s geo-tweets and sent her a text offer for a beer. Went shopping yesterday for a jacket and they had Bluetooth-enabled smart shelves with RFID. I put one jacket down and three feet later got an email with a discount offer on another jacket. I don’t think like I used to anymore. It’s like GPS: I don’t remember how to read a map. Or recall my phone number. Or know how to look up the definition of a word. I tend to make choices only via Group Think, after hearing from all my social networks along with multiple people I’ve never met…