This document provides an overview of and introduction to developing applications for Google Glass. It discusses the key UI components of Glass: live cards, static cards, and immersions. It then covers the software stack and walks through building a basic "Hello World" application with the Glass Development Kit (GDK). The document also discusses design patterns for Glassware, including ongoing tasks, services, and the Mirror API, and gives examples of using sensors, location services, and other Glass features in applications. Finally, it discusses design principles for Glass and explores some example Glass apps.
The document provides an overview and agenda for an Android development tutorial being given in Tokyo, Japan in October 2009. It introduces key Android concepts like Activities, Views, Intents, Services, and Notifications. It also outlines the development environment and tools needed, including Ubuntu, Eclipse, and the Android SDK. The document guides attendees through creating their first basic Android map application, including designing the UI, adding elements to the layout, and running the application.
This document provides instructions for setting up an environment for Android development. It discusses downloading and installing the Java Development Kit (JDK), Android SDK, and Android Studio. It also covers configuring a real Android device or emulator for testing apps. The document demonstrates creating a basic "Hello World" Android app in Android Studio and reviewing the underlying code and manifest file. It then briefly describes a more advanced app that scans for nearby WiFi access points when a button is pressed.
This document provides instructor notes for a slide set on setting up an Android development environment. It recommends displaying the slides on an LCD projector while keeping the instructor's notes view open on their PC. It also notes that pausing the presentation every 20 minutes for activities helps keep things engaging. The document explains how to access the instructor notes in Presenter View and provides a reminder about the blackout button to interrupt slides.
Engineering and Industrial Mobile Application (APP) Development by Living Online
This 3-day training course covers engineering and industrial mobile application development for Android devices. The training will introduce Android development basics like environment setup, building a basic project with user interfaces and coding behaviors. It will also cover more advanced topics like task reminders, menus, user input, tablet development and publishing apps to the Google Play Store.
This document provides an overview of green computing. It discusses how computing devices can harm the environment through energy waste when not in use. Approaches to green computing include virtualization, power management, reducing e-waste, and recycling. Implementations involve software that enables sleep modes, replacing CRT monitors with LCDs, and using more energy efficient hardware. The future of green computing will involve reducing carbon emissions and making devices more energy efficient through improvements like new materials. Examples of industrial implementations are provided like thin clients and low-power notebooks. The conclusion is that consumers will increasingly demand green computing as environmental issues become more important.
Google Glass is an augmented reality smart glasses project developed by Google. It contains a camera, display, touchpad, microphone, bone conduction transducer, and sensors. The display is a 640x360 pixel prism projector equivalent to a 25-inch screen from 8 feet away. It runs Android and is voice-controlled, allowing users to take pictures, record videos, get directions, search, and share information by speaking commands. However, prolonged use may cause eye strain and privacy issues due to its always-on camera and display.
Google Glass is an augmented reality eyewear project developed by Google that features an optical head-mounted display. The device was announced in 2012 and prototypes resemble regular eyeglasses with a head-up display replacing one lens. Google Glass operates using voice commands and responds via a bone conduction transducer, displaying information in a smartphone-like format hands-free. Concerns have been raised about the safety of operating vehicles or machinery while wearing Google Glass due to potential for distraction from the head-mounted display.
Google Glass is a wearable computer with an optical head-mounted display that is voice-controlled. It displays information like smartphone notifications in a hands-free format and allows users to interact with the internet via voice commands. Some key technologies used include augmented reality, which supplements real-world views with computer-generated information, and bone conduction, which transmits sound via bone vibrations. While promising for capabilities like accessing information and sharing perspectives, Google Glass also raises privacy and distraction concerns.
Google Glass Development Kit - Developer Zone by Utpal Betai
The document provides information about building applications for Google Glass using the Glass Development Kit (GDK). It discusses the different options for building the user interface, including live cards, static cards, and immersions. It also covers touch gestures, creating menus, and other development topics. The GDK allows access to hardware features and full control over the user interface compared to the Mirror API.
This presentation discusses how to create Glassware using the Mirror API, the GDK, and HTML5, along with a discussion of Live Cards and Immersions.
Various demos are presented, and you will see a quadcopter launched, along with the code.
This document discusses integrating Google Glass with SAP using the Mirror API and the Google API ABAP Client. It describes the different types of cards that can be used, such as static, live, and immersion cards. It also provides an overview of the Mirror API and how to use OAuth2 for authentication. The Google API ABAP Client library allows accessing Google APIs from ABAP. Examples are given of reporting applications using this library to interface with the Mirror API and publish cards to Google Glass.
The document provides an overview of the 2014 Google I/O conference. It outlines the key topics covered, which include updates to Android Wear, TV, Auto, Glass, and Google Play services. Material design was highlighted as a new visual language for developers. Improvements in ART, notifications, recent apps, and power efficiency in the Android L preview were also summarized. The document concludes by mentioning other topics like cloud computing, Android Studio, personal unlocking, and the Nest API.
Profiling is important for mobile apps, in particular on the Android platform. This presentation summarizes the state of the art in profiling Android applications as of Android version 4.3 "Jelly Bean".
After I attended Google I/O 2014, I wanted to present what is new for Android Lollipop from a developer perspective. This presentation covers almost everything except, maybe, native Android Wear development, Android Auto, and Android TV.
Android development - the basics, MFF UK, 2012 by Tomáš Kypta
This document provides an overview of Android development basics. It discusses the Android platform, ecosystem, and SDK tools. It describes key Android concepts like activities, services, content providers, and broadcasts. It also covers user interface components, resources, handling different device configurations, fragments, threads, menus, dialogs, notifications and more. The document is intended as an introduction to the fundamentals of Android development.
Google I/O 2019 - what's new in Android Q and Jetpack by Sunita Singh
Google IO 2019 highlighted several new Android features including Bubbles for easy multi-tasking, dark theme support, sharing improvements, and gesture navigation. It also provided updates on Jetpack components like CameraX, Navigation, Compose, ViewPager 2, ViewBindings and WorkManager to improve development. Machine learning was expanded through updates to MLKit and new features for on-device translation and object detection.
The Glass Class - Tutorial 4 - GDK-Live Cards by Gun Lee
Tutorial 4: GDK - Live Cards
The Glass Class at HIT Lab NZ
Learn how to program and develop for Google Glass.
https://www.youtube.com/watch?v=lnMKRpmtV-o&list=PLsIGb72j1WOlLFoJqkhyugDv-juTEAtas
http://arforglass.org
http://www.hitlabnz.org
The document summarizes the top 10 new features in Android M:
1. Android M preview timeline and expected Q3 2015 release.
2. Changes to app permissions including runtime permissions and reduced install/update friction.
3. How to properly request and handle permissions.
4. New fingerprint API and authentication without sharing credentials.
5. Doze mode for better battery life when idle and postponing non-important tasks.
The document discusses best practices for developing Android applications, including an overview of Android and its components, tools for Android development in Java using the Android SDK, guidelines for designing user interfaces, and recommendations for common UI patterns like dashboards and action bars. It provides resources for documentation, help, icon templates, and prototyping tools to help developers code well-designed Android apps.
Getting started with android dev and test perspective by Gunjan Kumar
The presentation covers a basic intro to Android and how to get started with development, including instructions on setup and common UI usages like menus and dialogs, plus details on services like Sensors, Location, and Google Maps.
It also covers ideas on how to test, including details on the shell and installation instructions without using Eclipse.
Android app developers in Bangalore - Thorsignia by charan Teja
Android App Development – With a technically skilled team of Android app developers in Bangalore, Thorsignia brings ideas to life. Our brilliant and sustained effort makes us the leading Android app development company in Bangalore.
This document provides an overview of mobile application development for Android. It discusses the Android platform architecture and application framework. The key application building blocks in Android like activities, intents, services and content providers are explained. It also describes the development tools and steps to create a simple "Hello World" application in Android. These include setting up the Android SDK, creating a new project in Eclipse, designing the UI layout and adding code to the activity. The document emphasizes that Android provides APIs for common tasks and uses the Java programming language for application development.
This document provides an overview of Android programming. It defines Android as an open-source operating system and development platform for mobile devices. Key points covered include Android's version history, core features and capabilities, the software stack and development framework, important terminology, and application fundamentals. Native Android applications like email, SMS, and maps are also briefly mentioned.
This document provides an overview of the mobile game development process from idea to product launch. It discusses developing the game idea and defining the game world. It also covers important aspects of game design like the audience, categories, playability, player progression, monetization, and usability. The document then discusses development with the Unity game engine, including dealing with device fragmentation, the GUI, profiling performance, asset stores, and best practices. It concludes with sections on social network integration and preparing for game launch, marketing, and monitoring.
The document discusses the history and evolution of smartphones from early devices like the Motorola DynaTAC 8000x to modern smartphones. It covers key aspects of smartphones like operating systems (Android, iOS, etc.), mobile development platforms, and the architecture and components of the Android operating system. It provides instructions on setting up development environments and outlines the basic process for creating a simple "Hello World" Android app, including key files like the manifest, layout files, and Java source code. Finally, it discusses a more complex example app for scanning and displaying nearby WiFi access points.
A Study of Variable-Role-based Feature Enrichment in Neural Models of Code by Aftab Hussain
Understanding variable roles in code has been found to be helpful to students learning programming -- could variable roles also help deep neural models perform coding tasks? We conduct an exploratory study.
- These are slides of the talk given at InteNSE'23: The 1st International Workshop on Interpretability and Robustness in Neural Software Engineering, co-located with the 45th International Conference on Software Engineering, ICSE 2023, Melbourne Australia
WWDC 2024 Keynote Review: For CocoaCoders Austin by Patrick Weigel
Overview of WWDC 2024 Keynote Address.
Covers: Apple Intelligence, iOS18, macOS Sequoia, iPadOS, watchOS, visionOS, and Apple TV+.
Understandable dialogue on Apple TV+
On-device, app-controlling AI.
Access to ChatGPT with a guest appearance by Chief Data Thief Sam Altman!
App Locking! iPhone Mirroring! And a Calculator!!
UI5con 2024 - Boost Your Development Experience with UI5 Tooling Extensions by Peter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can easily be extended to your needs. This session showcases various tooling extensions that can greatly boost your development experience: work fully offline, transpile the code in your project to use even newer versions of EcmaScript (beyond 2022, which the UI5 tooling currently supports), consume any npm package of your choice, use different kinds of proxies, and even stitch UI5 projects together during development to mimic your target environment.
Artificial Intelligence and XPath Extension Functions by Octavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
When it comes to ERP solutions, companies typically meet their needs with common ERP systems like SAP, Oracle, and Microsoft Dynamics. These big players have demonstrated that ERP systems can be either simple or highly comprehensive. This remains true today, but there are new factors to consider, including a promising new contender in the market: Odoo. This blog compares Odoo ERP with traditional ERP systems and explains why many companies now see Odoo ERP as the best choice.
What are ERP Systems?
An ERP, or Enterprise Resource Planning, system provides your company with valuable information to help you make better decisions and boost your ROI. You should choose an ERP system based on your company’s specific needs. For instance, if you run a manufacturing or retail business, you will need an ERP system that efficiently manages inventory. A consulting firm, on the other hand, would benefit from an ERP system that enhances daily operations. Similarly, eCommerce stores would select an ERP system tailored to their needs.
Because different businesses have different requirements, ERP system functionalities can vary. Among the various ERP systems available, Odoo ERP is considered one of the best in the ERP market, with more than 12 million global users today.
Odoo is an open-source ERP system initially designed for small to medium-sized businesses but now suitable for a wide range of companies. Odoo offers a scalable and configurable point-of-sale management solution and allows you to create customised modules for specific industries. Odoo is gaining popularity because it is built in a way that allows easy customisation, has a user-friendly interface, and is affordable. Here, we cover the main differences and explain why Odoo is gaining attention despite the many other ERP systems on the market.
8 Best Automated Android App Testing Tools and Frameworks in 2024 by kalichargn70th171
Regarding mobile operating systems, two major players dominate our thoughts: Android and iPhone. With Android leading the market, software development companies are focused on delivering apps compatible with this OS. Ensuring an app's functionality across various Android devices, OS versions, and hardware specifications is critical, making Android app testing essential.
Everything You Need to Know About X-Sign: The eSign Functionality of XfilesPr... by XfilesPro
Wondering how X-Sign gained popularity in a quick time span? This eSign functionality of XfilesPro DocuPrime has many advancements to offer for Salesforce users. Explore them now!
Do you want software for your business? Visit Deuglo.
Deuglo has top software developers in India. They are experts in software development and help design and create custom software solutions.
Deuglo follows a seven-step method for delivering services to its customers, called the software development life cycle (SDLC) process:
Requirement — collecting the requirements is the first phase of the SDLC process.
Feasibility Study — after the requirements are collected, the team assesses whether the project is viable before moving to design.
Design — in this phase, they start designing the software.
Coding — when the design is complete, the developers start coding the software.
Testing — once the coding of the software is done, the testing team starts testing.
Installation — after testing is complete, the application is deployed to the live server and launched.
Maintenance — after the software is delivered and customers start using it, the team provides ongoing maintenance.
Workshop - Innovating with Generative AI and Knowledge Graphs by Neo4j
Go beyond the hype around AI and discover practical techniques for using AI responsibly with your organization's data. Explore how knowledge graphs can increase accuracy, transparency, and explainability in generative AI systems. You will leave with hands-on experience combining data relationships with LLMs to bring domain-specific context and improve reasoning.
Bring your laptop and we will guide you through setting up your own generative AI stack, providing practical, coded examples to get you started in minutes.
Mobile App Development Company In Noida | Drona Infotech by Drona Infotech
Drona Infotech is a premier mobile app development company in Noida, providing cutting-edge solutions for businesses.
Visit Us For : https://www.dronainfotech.com/mobile-application-development/
Neo4j - Product Vision and Knowledge Graphs - GraphSummit Paris by Neo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Discover Neo4j's latest innovations, including the latest cloud integrations and product improvements that make Neo4j an essential choice for developers building applications with interconnected data and generative AI.
Transform Your Communication with Cloud-Based IVR SolutionsTheSMSPoint
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
5. How to operate
On startup:
• Swipe right → settings and live cards
• Swipe left → static cards and the timeline
• Say "ok glass" → voice commands
• Tap → installed apps
13. Activity
• A self-contained form, with its own main thread to handle UI events
• Derives from the base class android.app.Activity
• Entry point: the onCreate method
• Other methods: onAttachedToWindow
40. Camera
private static final int TAKE_PICTURE_REQUEST = 1;

private void takePicture() {
    Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    startActivityForResult(intent, TAKE_PICTURE_REQUEST);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == TAKE_PICTURE_REQUEST && resultCode == RESULT_OK) {
        // the captured picture can be processed here
    }
    super.onActivityResult(requestCode, resultCode, data);
}
41. Location Service
• Obtain the Location Manager:
– LocationManager locationManager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
• Create criteria (accuracy level / altitude required?):
– Criteria criteria = new Criteria();
– criteria.setAccuracy(Criteria.ACCURACY_FINE);
– criteria.setAltitudeRequired(true);
• Get the providers list:
– List<String> providers = locationManager.getProviders(criteria, true /* enabledOnly */);
• For each provider in the list, request location updates:
– locationManager.requestLocationUpdates(provider, 0, 0, locationListener);
42. Location Service: Listener
• Create a subclass of LocationListener to pass to requestLocationUpdates:
– public void onLocationChanged(Location location)
• Location object:
– double getLatitude() → the latitude, in degrees
– double getLongitude() → the longitude, in degrees
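The listener flow above can be sketched in plain Java. The real Android classes (LocationManager, LocationListener, Location) require the Android SDK, so SimpleLocation and SimpleLocationListener below are hypothetical stand-ins that mirror only the callback shape:

```java
// Plain-Java sketch of the Glass/Android location-listener callback pattern.
// SimpleLocation and SimpleLocationListener are hypothetical stand-ins for
// android.location.Location and android.location.LocationListener.
public class LocationSketch {
    // Stand-in for android.location.Location
    static class SimpleLocation {
        private final double latitude, longitude;
        SimpleLocation(double lat, double lon) { latitude = lat; longitude = lon; }
        double getLatitude()  { return latitude; }
        double getLongitude() { return longitude; }
    }

    // Stand-in for android.location.LocationListener (one callback method)
    interface SimpleLocationListener {
        void onLocationChanged(SimpleLocation location);
    }

    static String lastFix = "";

    public static void main(String[] args) {
        // Equivalent of the listener passed to requestLocationUpdates:
        SimpleLocationListener listener = location ->
            lastFix = location.getLatitude() + "," + location.getLongitude();

        // The framework would invoke the callback on each provider update;
        // we simulate a single update here.
        listener.onLocationChanged(new SimpleLocation(35.68, 139.69));
        System.out.println(lastFix);
    }
}
```

On a real device the framework drives the callback; the simulation only shows where the latitude/longitude values arrive.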
44. Other sensors: Listener
• Create a subclass of SensorEventListener to pass to registerListener:
– public final void onSensorChanged(SensorEvent event)
• SensorEvent object:
– float sensorVal = event.values[0];
46. Recap of UI Elements
The Glass user experience is based on three basic UI elements:
• Static card
– Displays text, HTML, images, and video
• Live card
– Used when users are actively engaged in a task
– Does not persist in the timeline
– Suited for real-time interaction with users
• Immersion
– Displays Android activities that take over the timeline experience
48. What is the Mirror API?
• Web-based services that interact with Google Glass over the cloud
• Does not require running code on Glass
• The user visits your web application from MyGlass and authenticates
• Your service then sends cards to the user as required
49. Why the Mirror API?
• Lets you use the language of your choice
• Platform independence
• Common infrastructure
• Built-in functionality
50. When to use the Mirror API?
• The Mirror API is great for delivering periodic notifications to users as important things happen.
• For example: a news delivery service that sends the top news stories.
51. How the Mirror API Works
• Uses RESTful services
• Encapsulates data as JSON
• Authenticates using OAuth2
• Standard POST, GET, and PUT methods for sending, listing, and updating information
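The request shape above can be illustrated with a minimal, dependency-free sketch of the JSON body a service POSTs to the Mirror API timeline endpoint. A real Glassware would use the Google API client library and an OAuth2 bearer token rather than hand-built strings:

```java
// Sketch of the JSON a Mirror API client POSTs to insert a timeline card
// (endpoint: https://www.googleapis.com/mirror/v1/timeline). The JSON is
// built by hand to keep the example dependency-free; production code would
// use the Google API client library.
public class TimelineItem {
    static String toJson(String text) {
        return "{\"text\": \"" + text + "\"}";
    }

    public static void main(String[] args) {
        String body = toJson("Hello from Glassware");
        // POST this body with Content-Type: application/json and an
        // Authorization: Bearer <token> header.
        System.out.println(body);
    }
}
```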
52. Example: Daily Health Tip
• Daily Health Tips is implemented using the Mirror API:
– Users subscribe by authenticating with OAuth 2.0
– Health Tips stores an index of users and their credentials
– A new tip is published every day: the service iterates through all stored users and inserts a timeline item into their timelines
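The daily loop can be sketched as follows; the user store and the request strings are hypothetical stand-ins for the Glassware's credential database and its authorized Mirror API calls:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the daily publish loop: iterate the stored users and produce one
// timeline insert per user. The map of user -> token stands in for the
// Glassware's database; the strings stand in for real Mirror API requests.
public class DailyTip {
    static List<String> publish(Map<String, String> userTokens, String tip) {
        List<String> requests = new ArrayList<>();
        for (Map.Entry<String, String> user : userTokens.entrySet()) {
            // One insert per subscribed user, authorized with that user's token.
            requests.add("POST /mirror/v1/timeline as " + user.getKey()
                         + ": {\"text\": \"" + tip + "\"}");
        }
        return requests;
    }

    public static void main(String[] args) {
        Map<String, String> users = new LinkedHashMap<>();
        users.put("alice", "token-a");
        users.put("bob", "token-b");
        System.out.println(publish(users, "Drink water").size());
    }
}
```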
53. Inserting a New Static Card
• When Glassware inserts static cards into the timeline, Glass may play a notification sound to alert users.
54. Static Card
• Resides to the right of the Glass clock by default
• Displays information to the user
• Does not require immediate attention
• Users can read or act on the card at their own leisure
55. What Else Can You Do?
• You can also attach objects to a static card, such as a location or media:
– Insert a timeline card with an attachment
– Attach video
57. Menu Items
• Menu items let users request actions related to the timeline card.
• Some built-in menu items include:
– reading a timeline card aloud
– navigating to a location
– sharing an image
– replying to a message
58. Subscriptions
• The Mirror API lets you subscribe to notifications that are sent when the user takes specific actions:
– Reply
– Delete
– Custom menu item selected
– Location update
– Voice command
• When you subscribe to a notification, you provide a callback URL that processes the notification.
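As a sketch, a subscription is created by POSTing a small JSON body naming the collection to watch and your callback URL; the URL below is a placeholder, and the body is hand-built to keep the example dependency-free:

```java
// Sketch of a Mirror API subscription body, POSTed to
// /mirror/v1/subscriptions so Google calls your HTTPS endpoint when the
// user acts on a card. The callback URL here is a placeholder.
public class Subscription {
    static String toJson(String collection, String callbackUrl) {
        return "{\"collection\": \"" + collection + "\", "
             + "\"callbackUrl\": \"" + callbackUrl + "\"}";
    }

    public static void main(String[] args) {
        System.out.println(toJson("timeline", "https://example.com/notify"));
    }
}
```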
59. Design Principles for Glass
• Glass differs from existing mobile platforms in both design and use. Follow these principles for the best experience:
– Design for Glass
– Don't get in the way
– Keep it relevant
– Avoid the unexpected
60. LET US NOW EXPLORE SOME GOOGLE GLASS APPS BY 10PEARLS
A quick recap of the three basic UI elements:
Static cards live on the right side of the timeline and display text and media. They persist in the timeline, much like posts on your Facebook timeline.
Live cards live on the left side and engage the user in a task. They are suitable for real-time interaction with users.
Immersions are more like mobile applications. They resemble live cards but give the developer more flexibility: they capture voice commands, accept swipe gestures, and do not dim after a set time (static and live cards dim out). None of these features are possible with static or live cards.
There are two major ways you can develop your Glassware. One is the GDK, which is the more traditional approach and is similar to Android development.
The Mirror API is a newer approach that uses web applications to build the structure.
The Mirror API is a set of web-based services that we use with Glass.
You don't need to worry about a particular language or platform.
You work on the upper layer, and the platform-specific library does the rest of the work for you.
Let's look at the more technical side of the Mirror API to see how it works.
The user goes to MyGlass, finds the application, and authenticates for the app.
The Glassware stores the user in its own database ("Glassware" is just a fancy name for Glass apps).
The Glassware runs as a web service according to its functional requirements. For this particular Glassware, we schedule execution every 24 hours: it iterates through all stored users in the database and asks Google to insert a health tip onto each user's timeline.
The Mirror API is responsible for syncing Glass with the latest information.