Electronic Tour Guide
Many visitors to Palestine and other countries need information about buildings, especially archaeological
ones, as they pass by them on their tours. To get this information they must ask a tour guide or other people, and often
they do not find answers to their questions. This paper proposes a system (an Android application) to solve this problem.
The only thing the visitor has to do is point his mobile towards a building to get information about it. The application
works as a compass, determining the device's direction using the orientation sensor; in addition, it determines the
longitude and latitude through the location APIs, then uses the direction and GPS position to identify the building the
user is pointing at, using a new algorithm this paper proposes.
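The direction-plus-location matching the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's actual algorithm; the building names, coordinates, and tolerance are made up. Given a GPS fix and the compass azimuth, compute the bearing from the user to each known building and pick the one whose bearing lies closest to the azimuth within a tolerance:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def pick_building(user_lat, user_lon, azimuth_deg, buildings, tolerance_deg=15):
    """Return the building whose bearing best matches the compass azimuth, or None."""
    best, best_diff = None, tolerance_deg
    for name, lat, lon in buildings:
        # Smallest signed angle between the bearing and the azimuth.
        diff = abs((bearing_deg(user_lat, user_lon, lat, lon) - azimuth_deg + 180) % 360 - 180)
        if diff < best_diff:
            best, best_diff = name, diff
    return best

# Illustrative database entries (coordinates approximate).
buildings = [("Church of the Nativity", 31.7042, 35.2073),
             ("Ibrahimi Mosque", 31.5247, 35.1107)]
print(pick_building(31.70, 35.20, 56.0, buildings))
```

The modulo-360 trick keeps the angular difference well defined across the north wrap-around (e.g. azimuth 359 vs bearing 1 gives a difference of 2, not 358).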
Smart Way to Track the Location in Android Operating System (IOSR Journals)
This document describes a smart location tracking application developed for the Android operating system. It uses location services like GPS and network providers to determine a user's current location. The application then lists nearby destinations and calculates the distance to each from the user's current position. It allows users to filter destinations by state or theme. When GPS is unavailable, it can determine location through query processing. The design integrates GPS, network providers, Google Maps, and a database to provide location tracking and information about destinations to users.
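The distance-to-each-destination step such an application performs is typically the haversine formula; the sketch below is a standard implementation, with an illustrative destination list that is not from the document:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_destinations(user, destinations):
    """Sort (name, lat, lon) destinations by distance from the user's position."""
    return sorted(destinations,
                  key=lambda d: haversine_km(user[0], user[1], d[1], d[2]))

dests = [("Museum", 31.78, 35.22), ("Old Market", 31.71, 35.20)]
order = nearest_destinations((31.70, 35.19), dests)
print([name for name, _, _ in order])
```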
The document describes a proposed idea called Track-O-Shoes, which uses sensors in shoes to track fitness metrics and location of the user. The shoes would contain a GPS sensor to track location in real-time, as well as proximity, accelerometer, gyroscope, and LED sensors to calculate step count, distance, activity time, calories burned, and provide light in dark conditions. The data would be sent to a mobile app via microcontroller. The app could then be used for fitness tracking and live location tracking of the user. Key components proposed include the sensors, microcontroller, and a mobile app compatible with Android devices.
1) The document proposes a method for gesture detection using a virtual surface detected by a webcam or front-facing camera on a laptop or smartphone.
2) By tracking the number and position of pixels representing an object's shape at different distances from the camera, the computer can detect movements of the object maintaining a constant distance as gestures on a virtual surface or plane.
3) This technique aims to enable gesture control of computer functions like mouse movement or app launching without requiring specialized cameras, as a cheaper and more portable alternative to physical input devices.
Top sensors inside the smartphone you want to know (soniyasag)
The document discusses the key sensors inside smartphones that enable their smart capabilities. It describes sensors such as the accelerometer, which senses device orientation; the magnetometer, which works with the digital compass to detect direction; the gyroscope, which maintains position and orientation; and the fingerprint sensor, which provides biometric authentication. It also mentions other sensors like the back-illuminated sensor for cameras, ambient light sensor for display brightness, GPS for location, proximity for detecting nearby objects, NFC for short-range connectivity, and more. All of these sensors collectively convert a normal phone into a smart device.
The document discusses the accelerometer and gyroscope sensors found in smartphones and their uses. The accelerometer measures device orientation and gravity while the gyroscope detects rotation along three axes. Together, they provide precise six-axis motion sensing by combining the three axes of each. This detailed motion data is used for camera image stabilization, gaming input, navigation apps, and general user interface interaction.
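A common way to combine the two sensors is a complementary filter; the sketch below is an illustrative example of that general technique, not anything taken from the document, with made-up sample data. The gyroscope integrates smoothly but drifts, while the accelerometer's gravity reading is noisy but drift-free, so the filter trusts the gyro short-term and the accelerometer long-term:

```python
import math

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """Fuse gyro pitch rate (deg/s) and accelerometer readings into a pitch angle.

    Each sample is (gyro_pitch_rate, accel_x, accel_z). alpha close to 1
    weights the gyro integration; (1 - alpha) pulls toward the
    accelerometer's gravity-derived angle, cancelling drift over time.
    """
    pitch = 0.0
    for rate, ax, az in samples:
        accel_pitch = math.degrees(math.atan2(ax, az))
        pitch = alpha * (pitch + rate * dt) + (1 - alpha) * accel_pitch
    return pitch

# Device held still at ~10 degrees of pitch: the gyro reports no rotation,
# and the accelerometer's gravity vector pulls the estimate toward 10 degrees.
still = [(0.0, math.sin(math.radians(10)), math.cos(math.radians(10)))] * 500
print(round(complementary_filter(still), 1))
```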
This document discusses augmented reality (AR) and how ARCore can be used for mobile AR applications in Unity. It defines AR, virtual reality and mixed reality. It explains the different types of AR like marker-based, location-based and markerless. It provides an overview of ARCore, describing how it performs motion tracking, environmental understanding and light estimation. It outlines the steps to get started with ARCore in Unity and discusses potential next steps like using light estimation, Google Poly and Google Resonance Audio in AR applications.
The document proposes developing a glove-based input device called TRIAD'32 that uses a 3-axis accelerometer to emulate 2D mouse functions for computers. It aims to allow for mouse control without requiring a flat surface by capturing 3D hand movements and transforming them into 2D cursor movements. The device would include buttons on the glove to perform left, right clicks and drag-and-drop functions. It would interface with Windows and achieve mouse-like resolution and response times. Experiments are conducted to test the accelerometer's functionality and a hardware filter.
The document describes the Sixth Sense device, which allows for gesture-based interaction between the physical and digital worlds. It consists of a camera, projector, color markers, mirror, and mobile component. The camera tracks hand gestures tagged with color markers to capture inputs. The projector displays digital information on surfaces based on the inputs. The mobile component processes the data and sends output to the projector. Some applications include taking pictures, drawing, checking time, making calls, and zooming. Sixth Sense enables natural hand gestures to manipulate projected interfaces.
This document summarizes a seminar presentation on ISPHERE technology for intuitive 3D modeling. ISPHERE is a spatial input device designed to reduce cognitive load. It is a foldable dodecahedron with capacitive sensors on each face that can detect hand movements and positions. The hardware sensors are connected to a microcontroller and software maps the input to manipulate 3D objects in modeling programs. The system allows more natural interactions like squeezing and stretching parts of a virtual object to transform it.
This document summarizes an interactive touch board that uses an infrared camera and infrared stylus. It can turn any projected display into an interactive surface. The system uses a low-cost infrared camera to detect the position of an infrared light from the stylus tip. An image processing algorithm analyzes the camera image to determine the stylus coordinates and move the mouse cursor accordingly. The algorithm was implemented using NI LabVIEW. Experimental results found average accuracy of 98.9% and latency of 0.28 seconds at a resolution of 800x600 pixels. This low-cost design could enable interactive whiteboard applications in education.
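The stylus-locating step can be illustrated with a simple bright-pixel centroid, a common approach for this kind of IR blob tracking; the document's actual LabVIEW algorithm may differ, and the frame data below is synthetic:

```python
def stylus_position(frame, threshold=200):
    """Locate the IR stylus tip as the centroid of bright pixels.

    frame: 2D list of grayscale intensities from the IR camera. Pixels at
    or above `threshold` are treated as the stylus LED. Returns (row, col)
    as floats, or None when the stylus is not visible.
    """
    total, sum_r, sum_c = 0, 0.0, 0.0
    for r, row in enumerate(frame):
        for c, value in enumerate(row):
            if value >= threshold:
                total += 1
                sum_r += r
                sum_c += c
    if total == 0:
        return None
    return sum_r / total, sum_c / total

# A dark 5x5 frame with a bright 2x2 blob spanning rows 1-2, cols 2-3.
frame = [[0] * 5 for _ in range(5)]
for r in (1, 2):
    for c in (2, 3):
        frame[r][c] = 255
print(stylus_position(frame))
```

In a real system the centroid would then be mapped through a calibration homography from camera coordinates to screen coordinates before moving the cursor.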
The document discusses several sensors found in mobile phones: proximity sensors, ambient light sensors, GPS, accelerometers, compasses, and gyroscopes. Proximity sensors detect when a phone is placed near the ear to turn off the screen. Ambient light sensors adjust screen brightness based on lighting. GPS uses satellites to determine location. Accelerometers and compasses detect orientation and rotation. Gyroscopes measure orientation based on angular momentum. The sensors enhance usability and functions like auto-rotating maps based on physical orientation.
This document describes a smart eye wearable device with mounted cameras and sensors. The smart eye glasses can serve as an artificial eye for blind people and can also be used for surveillance. It has features like motion detection, distance measurement, temperature sensing, and GPS. The hardware includes wearable glasses, an Arduino microcontroller, cameras, sensors. Stereo vision using two camera images is used to estimate depth and disparity maps. The smart eye was designed to help blind people navigate surroundings and has potential applications in defense and remote display.
iSphere is a revolutionary input device for modeling 3D geometries through hand manipulation. This device is equipped with capacitive sensors that can respond to the proximity of hand positions to 3D modeling systems.
Sensors are devices that measure physical quantities and convert them to signals that can be read by instruments or observers. Modern cellphones contain many sensors like microphones, cameras, GPS, touchscreens, accelerometers, and gyroscopes that have replaced the need for separate devices. The document discusses how common sensors like infrared sensors, accelerometers, gyroscopes, touchscreens, and cameras work, as well as where they are used. It notes that sensors have improved over time by becoming smaller, faster, better, and cheaper due to advances in processing technology and manufacturing.
The document discusses different types of sensors used for 3D digitization, including passive and active vision techniques. It describes synchronization circuit-based dual photocells that improve measurement stability and repeatability. Position sensitive detectors are discussed that can measure the position of a light spot in one or two dimensions on a sensor surface to acquire high-resolution 3D images. A proposed sensor architecture combines color and range sensing for applications like hand-held 3D cameras.
This document provides an overview of augmented reality including:
- The key differences between augmented reality and virtual reality, with augmented reality overlaying digital elements on the real world while maintaining a sense of the real environment.
- The main components of an augmented reality system including head-mounted displays, tracking systems like GPS, and mobile computing power.
- How video and optical see-through displays work to merge real and virtual scenes.
- Potential applications of augmented reality and some performance issues that need to be addressed.
This document discusses virtual reality and its types and applications. It defines virtual reality as a computer-generated immersive or wide field multi-sensory information which tracks users in real time. The main types discussed are immersive virtual reality, window on world virtual reality, and telepresence virtual reality. Applications mentioned include architecture, medicine, engineering and design, entertainment, training, and manufacturing. Advantages include creating realistic worlds and enabling experimentation, while disadvantages include high equipment costs and inability to fully replicate real world movement.
Working with Windows Phone sensors, GPS and maps (Malin De Silva)
This document discusses working with location services and device sensors in mobile applications. It covers getting location data through GPS and setting accuracy levels. It also discusses using the Bing Maps API and integrating maps into apps. Regarding sensors, it describes how to use the accelerometer, compass, gyroscope, inclinometer and other sensors to gather motion and orientation data and provide examples of applications. The document is intended to demonstrate how to implement these features in mobile apps.
International Journal of Engineering Research and Development (IJERD Editor)
This document summarizes a research paper on developing a location-based tracking application for Android. It describes the application called "Route Tracker" which allows users to track their movement and route on a map. The summary discusses the technologies used to develop the application such as Android classes for maps, locations and overlays. It also analyzes the complexity of classes used and constraints of deploying location-based services. The conclusion states that implementing location-based services requires integrating Android APIs with external APIs like Google Maps.
This document discusses the development of an Android application for physical activity recognition using the accelerometer sensor. It provides background on the Android operating system and its open development environment. It then summarizes relevant research papers on activity recognition using mobile sensors. The document outlines the process of collecting and labeling accelerometer data from smartphone sensors during different physical activities. Features are extracted from the sensor data and several machine learning classifiers are evaluated for activity recognition. The application will recognize activities and track metrics like calories burned, distance traveled, and implement fall detection and medical reminders.
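The feature-extraction-plus-classifier pipeline can be sketched as follows. The features, centroids, and sample data here are illustrative rather than taken from the paper, and a simple nearest-centroid rule stands in for the machine learning classifiers it evaluates:

```python
import statistics

def window_features(window):
    """Mean and standard deviation of acceleration magnitude over one window."""
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in window]
    return statistics.mean(mags), statistics.pstdev(mags)

def classify(features, centroids):
    """Nearest-centroid classifier over (mean, std) feature vectors."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Illustrative centroids: sitting shows a gravity-only magnitude with almost
# no variance; walking has a similar mean but much larger variance.
centroids = {"sitting": (9.8, 0.1), "walking": (9.8, 1.5)}
walk_window = [(0, 0, 9.8), (3, 0, 11.0), (-3, 0, 8.0)] * 10
print(classify(window_features(walk_window), centroids))
```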
The document summarizes a voice-controlled robot called Home Butler that is designed to assist handicapped individuals. The robot takes voice commands, locates requested objects using image processing and a database, navigates to the object using SLAM and LIDAR sensors, identifies the object with camera vision and image matching, grabs the object using sensors and motors, and returns it to the user by retracing its path. The robot integrates LabVIEW for voice decoding, MATLAB for image processing, and a Raspberry Pi operating system to run the integrated software and databases.
The document describes a project to create a wireless mouse using a SunSPOT sensor. Three approaches were considered: 1) connecting the sensor to a conventional mouse; 2) using the sensor independently; 3) using the sensor and base station to communicate with the computer. Approach 3 was selected because resources were available. The sensor sends tilt data to the base station, which translates it to mouse movements. Testing showed the sensor mouse works fully wirelessly but is very sensitive, requiring controlled movement. It allows using a mouse from a distance without wires.
An autonomous mobile robot platform has been assembled to automatically capture images from viewpoints spaced a few inches apart throughout an environment. The robot is controlled through a software interface that allows for monitoring image capture progress and interactively controlling the robot's speed, navigation along predetermined pathways, and collision avoidance using sonar. Future work will involve deploying the robot to capture large, real-world environments like the Purdue Union Ballroom through millions of high-resolution images and precisely computing its location using a laser positioning system.
Goal location prediction based on deep learning using RGB-D camera (journalBEEI)
In a navigation system, the desired destination position plays an essential role, since the path planning algorithm takes the current location and the goal location as inputs, along with a map of the surrounding environment. The path generated by the planner is used to guide the user to the final destination. This paper presents an algorithm based on an RGB-D camera to predict the goal coordinates in a 2D occupancy grid map for a navigation system for visually impaired people. In recent years, deep learning methods have been used in many object detection tasks, so an object detection method based on a convolutional neural network is adopted in the proposed algorithm. The distance between the sensor's current position and the detected object is measured from the depth data acquired by the RGB-D camera. The detected object's coordinates and the depth data are integrated to obtain an accurate goal location on the 2D map. The algorithm has been tested in various real-time scenarios, and the experimental results demonstrate its effectiveness.
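The integration of a detection's pixel position with its depth reading can be sketched as a pinhole-style projection into grid coordinates. The camera parameters, pose, and cell size below are assumed for illustration, not taken from the paper:

```python
import math

def goal_on_map(pixel_u, depth_m, img_width, hfov_deg,
                cam_x, cam_y, cam_yaw_deg, cell_m=0.05):
    """Project a detected object into 2D occupancy-grid cell coordinates.

    pixel_u: horizontal pixel of the detection's centre; depth_m: RGB-D
    range at that pixel. The camera pose (cam_x, cam_y, cam_yaw_deg) is
    assumed known from the localisation stage.
    """
    # Angle of the pixel ray relative to the optical axis, linearised
    # across half the horizontal field of view.
    offset = (pixel_u - img_width / 2) / (img_width / 2)
    ray_deg = cam_yaw_deg + offset * (hfov_deg / 2)
    # World coordinates of the object, then grid cell indices.
    gx = cam_x + depth_m * math.cos(math.radians(ray_deg))
    gy = cam_y + depth_m * math.sin(math.radians(ray_deg))
    return round(gx / cell_m), round(gy / cell_m)

# Object centred in a 640-px-wide image, 2 m ahead of a camera at the
# origin facing along +x: the goal lands 40 cells along the x axis.
print(goal_on_map(320, 2.0, 640, 60.0, 0.0, 0.0, 0.0))
```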
This document discusses various machine vision techniques used to estimate the self-position of mobile robots in industries. It describes techniques such as GPS, vision-based localization using cameras and image processing, 2D and 3D map-based localization, and memory-based localization using image autocorrelation. Vision-based techniques analyze camera images to detect landmarks, match detected landmarks to a database, and calculate the robot's position. Map-based localization uses 2D overhead maps captured by laser sensors or 3D models to localize the robot. Memory-based localization generates unique autocorrelation images from camera views and matches them to stored images to estimate the position.
Design of Image Projection Using Combined Approach for Tracking (IJMER)
Over the years, the techniques and methods used to interact with computers have evolved significantly. From the primitive use of punch cards to the latest touch-screen panels, we can see the vast improvement in interaction with the system. Many new projection and interaction technologies can reshape our perception and interaction methodologies, and projection technology is very useful for creating various geometric displays. In earlier generations, projector technology was used to project images and videos on a single screen using a large and bulky setup. To overcome these limitations we are designing "Wireless Image Projection Tracking", a system that uses IR (infrared) technology to track the body in the IR range and uses its movements for image orientation and manipulations such as zoom, tilt/rotate, and scale. We present a method of mapping an IR light source's position and orientation to an image. Using this system we can also track single and multiple IR light source positions, and it can be used effectively to view the image projection in 3D. Extensions of this technology could provide further tracking capabilities to implement a touch-screen feature for commercial applications.
The document proposes a framework that uses intelligent mobile devices to enable indoor wireless location tracking, navigation, and mobile augmented reality (AR). It discusses using mobile devices equipped with inertial measurement units (IMU) and multi-touch screens to provide user feedback to correct positioning errors. The framework also uses mobile AR through device cameras to help navigate users in complex 3D indoor environments and provide interactive location-based services. A prototype system was developed to demonstrate the feasibility of the proposed application framework.
This paper describes a decision-tree (DT) based pedometer algorithm and its implementation on Android. The DT-based pedometer can classify three gait patterns: walking on level ground (WLG), up stairs (WUS), and down stairs (WDS). It can discard irrelevant motion and count the user's steps accurately, with an overall classification accuracy of 89.4%. The device's accelerometer, gyroscope, and magnetic field sensors are used. When the user puts his or her smartphone into a pocket, the pedometer automatically counts steps across the different gait patterns. Two methods are tested to map the acceleration from the phone's reference frame to the direction of gravity, and two significant features are employed to classify the gait patterns.
The document discusses various sensors used in mobile phones. It describes proximity sensors which detect how close the phone is to the user's face and turn off the screen to save battery during calls. It also explains GPS sensors which track location using satellites, ambient light sensors which adjust screen brightness based on light levels, accelerometers which detect orientation changes, compass sensors which indicate direction using magnetism, gyroscopes which detect motion, and back-illuminated image sensors which improve low-light photography. These sensors power many smart features in phones and help differentiate them from conventional devices.
Goal location prediction based on deep learning using RGB-D camerajournalBEEI
In the navigation system, the desired destination position plays an essential role since the path planning algorithms takes a current location and goal location as inputs as well as the map of the surrounding environment. The generated path from path planning algorithm is used to guide a user to his final destination. This paper presents a proposed algorithm based on RGB-D camera to predict the goal coordinates in 2D occupancy grid map for visually impaired people navigation system. In recent years, deep learning methods have been used in many object detection tasks. So, the object detection method based on convolution neural network method is adopted in the proposed algorithm. The measuring distance between the current position of a sensor and the detected object depends on the depth data that is acquired from RGB-D camera. Both of the object detected coordinates and depth data has been integrated to get an accurate goal location in a 2D map. This proposed algorithm has been tested on various real-time scenarios. The experiments results indicate to the effectiveness of the proposed algorithm.
This document discusses various machine vision techniques used to estimate the self-position of mobile robots in industries. It describes techniques such as GPS, vision-based localization using cameras and image processing, 2D and 3D map-based localization, and memory-based localization using image autocorrelation. Vision-based techniques analyze camera images to detect landmarks, match detected landmarks to a database, and calculate the robot's position. Map-based localization uses 2D overhead maps captured by laser sensors or 3D models to localize the robot. Memory-based localization generates unique autocorrelation images from camera views and matches them to stored images to estimate the position.
Design of Image Projection Using Combined Approach for TrackingIJMER
Over the years the techniques and methods that have been used to interact with the
computers have evolved significantly. From the primitive use of punch cards to the latest touch screen
panels we can see the vast improvement in interaction with the system. There are many new ways of
projection and interaction technologies that can reshape our perception and interaction
methodologies. Also projection technology is very useful for creating various geometric displays. In
earlier generations, the projector technology was used for projecting images and videos on single
screen, using large and bulky setup. To overcome the earlier limitations we are designing “Wireless
Image Projection Tracking”, which is a system that uses IR (Infrared) technology to track the body in
the IR range and uses their movements for image orientation and manipulations like zoom, tilt/rotate,
and scale. We are presenting a method of mapping IR light source position and orientation to an
image. By using this system we can also track single and multiple IR light source positions and also it
can be used effectively to see the image projection in 3D view. Extension in this technology can further
be useful for future tracking capabilities to implement the touch screen feature for commercial
applications.
The document proposes a framework that uses intelligent mobile devices to enable indoor wireless location tracking, navigation, and mobile augmented reality (AR). It discusses using mobile devices equipped with inertial measurement units (IMU) and multi-touch screens to provide user feedback to correct positioning errors. The framework also uses mobile AR through device cameras to help navigate users in complex 3D indoor environments and provide interactive location-based services. A prototype system was developed to demonstrate the feasibility of the proposed application framework.
This paper describes a decision tree (DT) based pedometer algorithm and its implementation on
Android. The DT- based pedometer can classify 3 gait patterns, including walking on level
ground (WLG), up stairs (WUS) and down stairs (WDS). It can discard irrelevant motion and
count user’s steps accurately. The overall classification accuracy is 89.4%. Accelerometer,
gyroscope and magnetic field sensors are used in the device. When user puts his/her smart
phone into the pocket, the pedometer can automatically count steps of different gait patterns.
Two methods are tested to map the acceleration from mobile phone’s reference frame to the
direction of gravity. Two significant features are employed to classify different gait patterns.
The document discusses various sensors used in mobile phones. It describes proximity sensors which detect how close the phone is to the user's face and turn off the screen to save battery during calls. It also explains GPS sensors which track location using satellites, ambient light sensors which adjust screen brightness based on light levels, accelerometers which detect orientation changes, compass sensors which indicate direction using magnetism, gyroscopes which detect motion, and back-illuminated image sensors which improve low-light photography. These sensors power many smart features in phones and help differentiate them from conventional devices.
Iaetsd location-based services using autonomous gpsIaetsd Iaetsd
This document summarizes a research paper on developing an Android application using autonomous GPS to provide location-based services. The key points are:
1) It proposes an Android app that uses autonomous GPS (rather than cellular networks) to track a user's location in real-time. Autonomous GPS can find location faster and with lower cost compared to existing location-based systems.
2) The app is designed to find and tag services and people with their locations. It will display locations on a map using the Android location API and Google Maps API.
3) Autonomous GPS determines location directly from GPS satellites without needing wireless data, resulting in faster location fixes, lower cost, and less battery usage compared to assisted
The document discusses implementing geo-messaging on Android using Google Cloud Messaging (GCM) and Location Based Services APIs. It proposes an application that sends messages to users when they are near a specified location using geo-fencing. The application uses GCM to push messages from a server to devices and Location Services to detect when devices enter or exit geo-fenced areas to trigger message delivery. The document outlines the system architecture, registration process, geo-fencing implementation, and concludes discussing potential future applications of location-based services.
Android Phone has power to access or fetch data from remote location and provide various facilities to the user. Hence android applications have more and more demand because of its user friendly nature and its power of computation. Many tourist are having problem to search proper tourist places due to communication overhead or less facility of tourist guide. It is impractical to search each and every tourist place at every location. So in order to provide feasible as well as user friendly solution for this problem we develop an android application which will automatically recognize famous and nearby places and send notification to android phone. This application also provides weather recommendation feature which notifies the tourist about weather conditions of the destination before visiting it. All places are properly categorized and also with review or rating. The application also provides facility of vehicle mark to reach your vehicle after site visit. We are using Triangulation method with LBS as well as GPS to track the location of user. And as per his location, relevant list of tourist places will be send in the form of pop up notification.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Modern surveying techniques provide accurate measurements for civil engineering projects. Techniques like total stations, digital levels, GPS, and data collectors allow surveyors to measure horizontal and vertical angles as well as distances electronically. This improves accuracy and efficiency over traditional methods. Specific applications in civil engineering include road, tunnel, and bridge alignment, infrastructure mapping for urban planning, and construction site layout and monitoring. The summarized data can be imported into CAD software for modeling and analysis. Overall, modern surveying techniques enhance accuracy and productivity for civil engineering design and construction.
Range Free Localization using Expected Hop Progress in Wireless Sensor NetworkAM Publications
Wireless sensor network (WSN) combines the concept of wireless network with sensors. Wireless Sensor Networks
have been proposed for a multitude of location-dependent applications. Localization (location estimation) capability is
essential in most wireless sensor network applications. In environmental monitoring applications such as animal habitat
monitoring, bush fire surveillance, water quality monitoring and precision agriculture, the measurement data are
meaningless without an accurate knowledge of the location from where the data are obtained. Finding position without the
aid of GPS in each node of an ad hoc network is important in cases where GPS is either not accessible, or not practical to use
due to power, form factor or line of sight conditions. So here we are going to used DV-Hop algorithm, i.e. distance vector
routing algorithm for finding the position of sensor. Here we summarizes the performance evaluation criteria of the
wireless sensor network and algorithms, classification methods, and highlights the principles and characteristics of the
algorithm and system representative of the field in recent years, and several algorithms simulation and analysis.
A one decade survey of autonomous mobile robot systems IJECEIAES
Recently, autonomous mobile robots have gained popularity in the modern world due to their relevance technology and application in real world situations. The global market for mobile robots will grow significantly over the next 20 years. Autonomous mobile robots are found in many fields including institutions, industry, business, hospitals, agriculture as well as private households for the purpose of improving day-to-day activities and services. The development of technology has increased in the requirements for mobile robots because of the services and tasks provided by them, like rescue and research operations, surveillance, carry heavy objects and so on. Researchers have conducted many works on the importance of robots, their uses, and problems. This article aims to analyze the control system of mobile robots and the way robots have the ability of moving in real-world to achieve their goals. It should be noted that there are several technological directions in a mobile robot industry. It must be observed and integrated so that the robot functions properly: Navigation systems, localization systems, detection systems (sensors) along with motion and kinematics and dynamics systems. All such systems should be united through a control unit; thus, the mission or work of mobile robots are conducted with reliability.
The document describes an algorithm that generates maps of an unknown environment using a Pioneer 2DX mobile robot in a simulated Player/Stage environment. The algorithm uses a laser sensor on the robot to detect obstacles and walls as it travels through a virtual representation of the Autonomous University of Yucatan campus. Maps created with the algorithm accurately represent the environment in a way that allows the robot to navigate. The algorithm calculates coordinate positions of detected objects using trigonometry based on the laser sensor's distance and angle measurements. The mapping is tested through simulations of the robot traveling autonomously in the virtual environment.
Our uDirect technology can accurately estimate the facing direction of a mobile phone user, independent of device position and orientation and without any user intervention.
Knowing which way a user is facing can provide valuable information for the provision of mobile services and applications. The proposed technology utilises inertial sensors that are readily available in mobile consumer devices (e.g. smart phones). While providing high accuracy, the solution is able to cope with arbitrary wearing positions and orientations of the mobile consumer device, making it suitable for use in every-day life situations. This is achieved by an estimation of the user orientation with respect to the reference frame of both sensing module, and earth coordinate system.
Total station and its application to civil engineeringTushar Dholakia
Total stations are surveying instruments that combine an electronic theodolite, electronic distance meter, and on-board computer. They allow users to measure horizontal and vertical angles as well as slope distances to calculate coordinates. Modern total stations can store thousands of data points, perform computations, and transfer data remotely via memory cards or wireless connections. They have largely replaced standalone theodolites and distance meters due to greater accuracy, automation, and data processing capabilities. Total stations find wide application in civil engineering, mining, accident reconstruction, and other fields requiring precise spatial measurements and positioning.
This document describes a proposed system to implement geo-messaging using location-based services and Google Cloud Messaging APIs on Android. The system would send messages to students when they enter the vicinity of their college. It would use Geo-Fencing API to set the boundary for the college location and track students' entry and exit. Google Cloud Messaging API would be used to push messages from the server to students' devices.
International Journal of Engineering and Science Invention (IJESI)inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online
Android application to locate and track mobile phones(aaltm) an implementati...eSAT Journals
Abstract
With the advancement of technology and modern science, people are expecting information about the location of any object for
tracking purposes. GPS is a system which has been implemented and is accessible without any restriction. Having the facility of
GPS, we need a GPS device to calculate the location from the information taken from GPS. This paper presents a study on
Location Based Services and is directed on locating and tracking of Android devices. In this implementation I am using Global
Positioning System (GPS) and Web Services, and android smart phone device since Android devices have a built-in feature of
GPS service. Some latest demanding tools and technology used for this implementation are Java, AVD, WAMP etc.
Keywords: Location Based Services; Global Positioning System (GPS); location; path
Design and Implementation of a GPS based Personal Tracking SystemSudhanshu Janwadkar
Design and Implementation of a GPS based Personal Tracking System
Tracking based applications have been quite popular in recent times. Most of them have been limited to commercial applications such as vehicular tracking (e.g tracking of a train etc). However, not much work has been done towards design of a personal tracking system. Our Research work is an attempt to design such personal tracking system. In this paper, we have shared glimpses of our research work.
The objective of our research project is to design & develop a system which is capable of tracking and monitoring a person, object or any other asset of importance (called as target). The system uses GPS to determine the exact position of the target. The target is aided with a compact handheld device which consists of a GPS receiver and GSM modem. GPS receiver obtains location coordinates (viz. Latitude & Longitude) from GPS satellites. The location information in NMEA format is decoded, formatted and sent to control station, through a GSM modem. Due to use of Open CPU development platform, no external Microcontroller is required, with additional advantage of compact size product, reduced design & development time and reduced cost.
Thus, the proposed system is able to track the accurate location of target. This system finds applications in tracking old-age people, tracking animals in forest, tracking delivery of goods etc. Our final designed system is a small-size compact l.S"X3.7S" Tracker system with position accuracy error <30m (100 feet).
Techniques that Facebook use to Analyze and QuerySocial GraphsHaneen Droubi
Facebook uses various techniques to analyze and query social graphs. These include discovering cohesive subgraphs through approaches like finding large, dense subgraphs and top-k dense subgraphs. Memcache is used as a distributed memory caching system to speed up dynamic databases. Typeahead allows searching as a user types through prefix matching. Unicorn is Facebook's system for searching the social graph through an inverted index that represents it as adjacency lists. Systems like Hive, Scuba and Dremel are used to analyze data at large scales.
دليل المعلم لشرح وتركيب جميع دارات الصف التاسع واجابات اسئلة الكتابHaneen Droubi
دليل المعلم لشرح وتركيب جميع دارات الصف التاسع واجابات اسئلة الكتاب
شرح دارات الكتاب - وتركيبها - واجابة اسئلة الكتاب كاملة
الدليل الاقوى والاوضح لمادة التكنولوجيا في الصف التاسع
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Eric Nizeyimana's document 2006 from gicumbi to ttc nyamata handball play
INTERNATIONAL CONFERENCE ON SMART CITIES SOLUTIONS
The International Conference on Smart Cities: Horizons and Limits

Electronic Tour Guide

Haneen Fuad Droubi
Master Student, Faculty of Engineering and Technology, Birzeit University, haneendroubi@gmail.com
Abstract— Many visitors to Palestine and other countries need information about buildings, especially archaeological buildings, as they pass by them on their tours. To obtain this information they must ask a tour guide or other people, and often they do not find answers to their questions. This paper proposes a system (an Android application) to solve this problem. The only thing the visitor has to do is point his mobile device toward a building to get information about it. The application works as a compass, determining the device's direction using the orientation sensor; in addition, it determines the longitude and latitude by accessing the location APIs, and then uses the direction and the GPS location to identify the building the user is pointing his mobile at, using a new algorithm that this paper proposes.
Index Terms— GPS, Sensors, Tourist guide system, Android, Algorithm, Buildings.
I INTRODUCTION
Smartphones are growing exponentially in popularity and offer ever greater services to users. The mobile telephone is more than a device for making calls: it provides access to many applications and services, either via an internet connection or through stand-alone applications.

Mobile tourism is a term that has emerged over the last two decades. It refers to using a mobile device as an electronic tourist guide [1]. Mobile devices present many unique characteristics that make their use as electronic tourist guides particularly attractive, such as ubiquity, convenience, and positioning: by employing technologies like GPS, users may receive and access information and services specific to their location [2]. While much of the underlying technology is already available, there are still open challenges with respect to design, usability, portability, functionality and implementation aspects.
Most existing tourism solutions are applications that give information about the location where the tourist is standing. For example, the "Android city tour guide system based on Web service for Mumbai" focuses on querying information about hotels, scenery, restaurants, traffic, and schools, and many other proposed solutions follow the same idea [3]. A wireless Internet phone, jointly developed by a number of enterprises from the United States and Japan, uses an electronic compass and a GPS chip to determine the specific location of the user and shows that location on the phone screen [4]. Zhejiang University carried out a similar study and developed an intelligent tour guide system based on the Android platform. The Google API is applied to access Google Maps in the tour guide system; after the GPS chip within the tour guide platform calculates the visitor's location, the mobile phone program displays information to its users [5]. This paper proposes a new algorithm to detect which building the user is pointing his mobile device at and give him the corresponding information.
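The detection idea described above combines two quantities: the device's compass direction (azimuth) and the bearing from the user's GPS position to each candidate building. The paper's own algorithm is presented later; as background, a minimal plain-Java sketch of the underlying geometry is shown below. The class and method names, and the angular tolerance, are illustrative assumptions, not taken from the paper.

```java
public class Bearing {
    // Initial great-circle bearing from point 1 to point 2, in degrees
    // clockwise from true north, normalized to [0, 360).
    public static double bearingDegrees(double lat1, double lon1,
                                        double lat2, double lon2) {
        double p1 = Math.toRadians(lat1);
        double p2 = Math.toRadians(lat2);
        double dl = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dl) * Math.cos(p2);
        double x = Math.cos(p1) * Math.sin(p2)
                 - Math.sin(p1) * Math.cos(p2) * Math.cos(dl);
        double b = Math.toDegrees(Math.atan2(y, x));
        return (b % 360.0 + 360.0) % 360.0;
    }

    // A building "matches" when the device azimuth points at it within a
    // tolerance (hypothetical threshold, e.g. 10-15 degrees), taking the
    // shorter way around the circle into account.
    public static boolean matches(double azimuth, double bearing, double tol) {
        double diff = Math.abs(azimuth - bearing) % 360.0;
        return Math.min(diff, 360.0 - diff) <= tol;
    }

    public static void main(String[] args) {
        // Bearing due east along the equator is approximately 90 degrees.
        System.out.println(bearingDegrees(0, 0, 0, 1));
    }
}
```

A real system would compute this bearing for every building in the database near the user's GPS fix and report the one whose bearing best matches the current azimuth.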
The Android platform supports three broad categories of sensors: those that measure motion, orientation, and various environmental conditions. Most Android devices have many built-in sensors in these categories; some are hardware-based and others are software-based. Developers can access the sensors available on the device and acquire raw sensor data by using the Android sensor framework, which is part of the android.hardware package and includes the SensorManager, Sensor, SensorEvent and SensorEventListener classes, all of which the proposed system will use [6].
The Android sensor framework uses a standard 3-axis
coordinate system to express data values.
Figure 1. Coordinate system (relative to a device) used by the Sensor API. [8]
Haneen Fuad Droubi / Electronic Tour Guide (2016)
The behavior of this coordinate system is the same as the behavior of the OpenGL coordinate system; in other words, the axes are not swapped when the device's screen orientation changes. The
coordinate system is defined relative to the device's screen
when the device is held in its default orientation (Figure 1).
The X axis is horizontal and points to the right, the Y axis is vertical and points up, and the Z axis points toward the outside of the screen face. Coordinates behind the screen have negative Z values. [7]
The Proposed System needs the TYPE_ORIENTATION sensor, a software sensor that measures the degrees of rotation a device makes around all three physical axes (x, y, z) to determine the device position. This sensor is available on all Android platforms (Android 1.5 (API Level 3) to Android 4.0 (API Level 14)) and derives its data from the accelerometer and the geomagnetic field sensor to determine a device's position relative to the magnetic North Pole. The orientation sensor returns a multi-dimensional array of sensor values for each SensorEvent, providing azimuth (yaw), pitch, and roll values as shown in Table (1) [8]. The Proposed System will use the azimuth values.
SensorEvent.values[0] Azimuth (angle around the z-axis).
SensorEvent.values[1] Pitch (angle around the x-axis).
SensorEvent.values[2] Roll (angle around the y-axis).
Android has two types of location strategies: GPS and Android's Network Location Provider. The developer can use both GPS and the Network Location Provider, or just one. The Network Location Provider determines user location using cell tower and Wi-Fi signals, providing location information in a way that works indoors and outdoors, but GPS is the most accurate. As accuracy is the important thing in our case, we will use GPS [9].
GPS (Global Positioning System) is a space-based satellite navigation system that provides location and time information in all weather conditions using longitudes and latitudes [11].
Latitude and longitude is the most common grid system used for navigation. It allows you to pinpoint your location with a high degree of accuracy. Latitude is the angular distance measured north and south of the Equator. The Equator is 0 degrees. As you go north of the equator, the latitude increases all the way up to 90 degrees at the North Pole [9]. Longitude works the same way: it is the angular distance measured east and west of the Prime Meridian. The prime meridian is 0 degrees longitude. As you go east from the prime meridian, the longitude increases to 180 degrees [9].
II PROPOSED SYSTEM
The Proposed System is based on two basic rules. The first one is to determine the mobile direction, which is the device's inclination angle relative to the magnetic North Pole, after accessing the TYPE_ORIENTATION sensor. The second basic rule is to determine the longitude and latitude after accessing the location APIs in the package android.location. Direction and GPS location are then combined to determine the building that the user directs his mobile toward, using the idea of dividing the map into squares and rearranging the buildings relative to the user location, depending on the fact that as you go north of the equator the latitude increases, and as you go east from the prime meridian the longitude increases. Then the priority of the buildings relative to the user location is computed, by computing the angle between the longitude of the user location and the line between the user location point and the building center point. This paper will use the Birzeit University map as a case to explain the idea of this System.
A Determine The Mobile Direction
The Proposed System will determine the mobile direction using the orientation sensor. The orientation sensor lets you monitor the position of a device relative to the earth's frame of reference (specifically, magnetic north). The orientation sensor derives its data by using a device's geomagnetic field sensor in combination with the device's accelerometer. Using these two hardware sensors, an orientation sensor provides data for the following three dimensions: [7]
• Azimuth (degrees of rotation around the z axis). This is the angle between magnetic north and the device's y axis. For example, if the device's y axis is aligned with magnetic north this value is 0, and if the device's y axis is pointing south this value is 180. Likewise, when the y axis is pointing east this value is 90 and when it is pointing west this value is 270.
• Pitch (degrees of rotation around the x axis). This value is positive when the positive z axis rotates toward the positive y axis, and it is negative when the positive z axis rotates toward the negative y axis. The range of values is 180 degrees to -180 degrees.
• Roll (degrees of rotation around the y axis). This value is positive when the positive z axis rotates toward the positive x axis, and it is negative when the positive z axis rotates toward the negative x axis. The range of values is 90 degrees to -90 degrees.
1: imports (Sensor, SensorEvent, SensorManager, SensorEventListener, ...)
2: implements SensorEventListener
3: // Initialize the device sensor manager
SensorManager mSensorManager;
4: onCreate() {
    // Create an instance of the SensorManager class: get a reference
    // to the sensor service
    mSensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
}
5: onResume() {
    // Register a listener for the system orientation sensor
    mSensorManager.registerListener(this,
        mSensorManager.getDefaultSensor(Sensor.TYPE_ORIENTATION),
        SensorManager.SENSOR_DELAY_GAME);
}
6: onPause() {
    // Unregister the listener to stop updates and save battery
    mSensorManager.unregisterListener(this);
}
7: onSensorChanged(SensorEvent event) {
    // Get the rotation angle around the z-axis (azimuth)
    degree = event.values[0];
}
Table 1. TYPE_ORIENTATION Matrix. [7]
In the first step, the Proposed System imports the SensorManager class, which lets you access the device's sensors; an instance of this class is obtained by calling Context.getSystemService() with the argument SENSOR_SERVICE, in step 4, inside the onCreate() method [5]. To monitor raw sensor data we need to implement two callback methods, onAccuracyChanged() and onSensorChanged(), that are exposed through the SensorEventListener interface, so the Proposed System implements this interface in step 2. The Proposed System uses the onSensorChanged() method, which provides us with a SensorEvent object containing information about the new sensor data, including: the accuracy of the data, the sensor that generated the data, the timestamp at which the data was generated, and the new data that the sensor recorded.
The data delay controls the interval at which sensor events are sent to the application via the onSensorChanged() callback method. The application uses SENSOR_DELAY_GAME (a 20,000 microsecond delay) as an argument to registerListener() in the onResume() method in step 5. As we registered a listener, we should unregister the sensor event listener in the onPause() method, step (6) [6]. If we do not unregister the listener, the battery can drain in just a few hours, because the system will not disable sensors automatically when the screen turns off [10].
As mentioned previously, an orientation sensor provides data for three dimensions as a matrix (Azimuth, Pitch, Roll). The proposed application only needs the azimuth (values[0]) to determine the direction of the mobile, pseudo code (1), step 7. values[0] is the angle between magnetic north and the device's y axis. If the device is directed to the north, the y axis is aligned with magnetic north and values[0] = 0; if the device's y axis is pointing south this value is 180. Likewise, when the y axis is pointing east this value is 90 and when it is pointing west this value is 270, Figure (3).
B Mobile Location
The Proposed System needs to determine the longitude and latitude after accessing the location APIs in the package android.location, pseudo code (2).
In order to receive location updates from NETWORK_PROVIDER or GPS_PROVIDER, we must request user permission by declaring either the ACCESS_COARSE_LOCATION or the ACCESS_FINE_LOCATION permission, respectively, in our Android manifest file. The Proposed System uses GPS_PROVIDER, so it needs the ACCESS_FINE_LOCATION permission:
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" /> [3]
1: imports (Location, LocationListener, LocationManager)
2: // Create a class that implements LocationListener
implements LocationListener
3: private LocationManager locationManager;
4: onCreate() {
    // Get the GPS location service, a LocationManager object
    locationManager = (LocationManager)
        getSystemService(Context.LOCATION_SERVICE);
    // Call requestLocationUpdates; updates then arrive
    // periodically, after each 3 sec / 10 m
    locationManager.requestLocationUpdates(
        LocationManager.GPS_PROVIDER,
        3000, // 3 sec
        10, this);
}
5: onLocationChanged(Location location) {
    Latitude: location.getLatitude();
    Longitude: location.getLongitude();
}
6: onProviderDisabled() {
    // Called when the user turns GPS off
    show "GPS turned off"
}
7: onProviderEnabled() {
    // Called when the user turns GPS on
    show "GPS turned on"
}
Pseudo code 1. Mobile Direction
Figure 3. value[0] relative to the Android mobile (clockwise: N = 0, E = 90, S = 180, W = 270 degrees).
Getting the user location in Android works by means of callbacks. The first parameter of requestLocationUpdates() is the name of the provider with which to register; in our case it is GPS_PROVIDER. The second parameter indicates the minimum time interval between notifications, in milliseconds; this field is only used as a hint to conserve power, and the actual time between location updates may be greater or less than this value. The third parameter is the minimum distance interval between notifications, in meters.
C Combine location and Direction
The idea of detecting the building which the user directs his mobile toward has three steps. The first one is dividing the map into squares. The second step is rearranging the buildings relative to the user location. The third step is computing the priority of the buildings' locations relative to the user location, by computing the angle between the longitude of the user location and the line between the user location point and the building center point.
Dividing The Map Into Squares
The first step in the Proposed System is dividing the map into squares. In our case (the Birzeit University map) we do not need more than 25 squares. The map is divided into 25 squares, 5 in each row; the application automatically builds a 5*5 matrix of the numbers of those squares, which means there are 6 longitudes and 6 latitudes as input. The application reads those coordinates from a text file, stores them in a 2D matrix, and then uses them to detect which square the user is in. As mentioned previously, the application accesses GPS to detect the longitude and latitude of the point the user stands on. The application compares the longitude of the user location with the 6 longitudes of the squares to detect the column, and compares the latitude of the user point with the 6 latitudes of the squares to detect the row of the matrix, Figure (4). Now the application has a column and a row, which means it can find the square number like this: square = area[row][column]. After that the application retrieves all buildings in that square.
Rearrange Buildings relative to the user location
The second step in the Proposed System is rearranging the buildings relative to the user location, depending on the fact that as you go north of the equator the latitude increases all the way up to 90 degrees at the North Pole, and as you go east from the prime meridian the longitude increases to 180 degrees. The Proposed System makes the decision by comparing the longitude of the user location with the building center longitude to determine whether the building is to the east or the west of the user location: if the longitude of the user location is greater than the longitude of the building center, then the building is located to the west of the user; otherwise it is to the east of him. In addition, the Proposed System compares the latitude of the user location with the building center latitude to determine whether the building is to the north or the south of the user location: if the latitude of the user location is greater than the latitude of the building center, then the building is located to the south of the user; otherwise it is to the north of him. That means each building has one of four directions relative to the user location (South West (SW), North West (NW), South East (SE), North East (NE)). Looking at Figure (5), you can see that building number 28 and building number 25 are in direction SE relative to the user location. The application uses the degree that the orientation sensor returns to determine the buildings whose direction is the same as the direction of the sensor degree:
0 – 90 = NE
90 – 180 = SE
180 – 270 = SW
270 – 360 = NW
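The two comparisons above can be sketched as plain Java. The class and method names and the coordinates in main are illustrative assumptions; only the comparison rules themselves come from the paper:

```java
public class DirectionFilter {

    // North if the building's latitude is greater than the user's,
    // east if its longitude is greater (the paper's comparison rule).
    static String quadrantOfBuilding(double myLat, double myLong,
                                     double bLat, double bLong) {
        String ns = (myLat > bLat) ? "S" : "N";
        String ew = (myLong > bLong) ? "W" : "E";
        return ns + ew;
    }

    // Map the azimuth (0..360 clockwise from north) to a quadrant:
    // 0-90 = NE, 90-180 = SE, 180-270 = SW, 270-360 = NW.
    static String quadrantOfAzimuth(double degree) {
        if (degree < 90)  return "NE";
        if (degree < 180) return "SE";
        if (degree < 270) return "SW";
        return "NW";
    }

    public static void main(String[] args) {
        // A building slightly north-east of the user, phone at 45 degrees:
        String b = quadrantOfBuilding(32.0, 35.0, 32.001, 35.001);
        System.out.println(b + " matches " + quadrantOfAzimuth(45.0));
    }
}
```

Only the buildings whose quadrant equals the quadrant of the sensor degree survive to the priority step.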
Pseudo code 2. GPS Location
Figure 4. BZU campus map divided into numbered squares (1, 2, 3, ..., 25) by a grid of border latitudes Lat1–Lat6 and longitudes Long1–Long6.
Compute The Priority Of Buildings Relative To User Location
The third step in the Proposed System is computing the priority of the buildings found in the previous step relative to the user location, by computing the angle between the longitude of the user location and the line between the user location point and the building center point, Figure (6). The Proposed System computes x = |MyLong – Long1| and y = |MyLat – Lat1| for each building, then computes α = tan⁻¹(x/y). After that, the Proposed System computes β (the angle between the y axis of the user's mobile and the longitude of the user location) from degree (the angle returned from the orientation sensor) in four cases:
• If the direction of the building in the previous step is NE, then β = degree.
• If the direction of the building in the previous step is SE, then β = 180 - degree, Figure (6).
• If the direction of the building in the previous step is SW, then β = degree - 180.
• If the direction of the building in the previous step is NW, then β = 360 - degree.
Then the Proposed System computes the difference between β and α (difference = |β – α|) for each building, and after that takes the minimum value from all the buildings' difference values. Looking at Figure (6), in this case the direction is SE, and the system chooses building 28 because its difference value is smaller than the difference value of building 25.
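The α/β computation above can be sketched as plain Java. The class name, the SE test case, and the coordinates are illustrative assumptions; atan2 is used here to implement tan⁻¹(x/y) while also handling y = 0 safely, a small robustness addition not discussed in the paper:

```java
public class PriorityMatcher {

    // alpha: the angle between the user's meridian (the north-south line
    // through the user) and the line from the user to the building center,
    // computed from x = |MyLong - Long1| and y = |MyLat - Lat1|.
    static double alpha(double myLat, double myLong,
                        double bLat, double bLong) {
        double x = Math.abs(myLong - bLong);
        double y = Math.abs(myLat - bLat);
        // atan2(x, y) equals tan^-1(x/y) for y > 0 and is safe when y = 0
        return Math.toDegrees(Math.atan2(x, y));
    }

    // beta: the angle between the device's y axis and the user's meridian,
    // derived from the sensor degree according to the building's quadrant.
    static double beta(double degree, String quadrant) {
        switch (quadrant) {
            case "NE": return degree;
            case "SE": return 180 - degree;
            case "SW": return degree - 180;
            default:   return 360 - degree; // NW
        }
    }

    public static void main(String[] args) {
        // Hypothetical SE case in the spirit of Figure 6: the phone points
        // at 150 degrees, so beta = 180 - 150 = 30 degrees.
        double b = beta(150.0, "SE");
        double a28 = alpha(32.0, 35.0, 31.998, 35.001); // about 26.6
        double a25 = alpha(32.0, 35.0, 31.999, 35.003); // about 71.6
        // The building with the smaller |beta - alpha| is chosen.
        System.out.println(Math.abs(b - a28) < Math.abs(b - a25)
                           ? "building 28" : "building 25");
    }
}
```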
As we know, any application needs some inputs to give results. In our application the input is a text file containing the longitude and latitude of each building center, in addition to the longitude and latitude of the square borders and the numbers of the buildings in each square, for any map. The application does its processing on those inputs as mentioned previously, and the output is the information of the building the user directed his mobile towards, Figure (7).
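The paper does not specify the exact layout of the input text file. A minimal sketch of a reader, assuming one comma-separated "latitude, longitude" pair per line (an assumed format, with made-up example values), might look like this:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class InputReader {

    // Parse one "latitude, longitude" line into a pair of doubles.
    static double[] parseLatLong(String line) {
        String[] parts = line.split(",");
        return new double[] {
            Double.parseDouble(parts[0].trim()), // latitude
            Double.parseDouble(parts[1].trim())  // longitude
        };
    }

    public static void main(String[] args) {
        // In the real application these lines would come from the input
        // text file; the values below are made-up examples.
        List<String> lines = Arrays.asList("31.9610, 35.1830",
                                           "31.9625, 35.1845");
        List<double[]> centers = new ArrayList<>();
        for (String line : lines) centers.add(parseLatLong(line));
        System.out.println(centers.size()); // prints 2
    }
}
```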
The application is compatible with all Android platforms from Android 1.5 and up.
III CONCLUSION
The paper has proposed a new algorithm to detect where the visitor directs his mobile. The Proposed System is based on two basic rules. The first one is to determine the mobile direction, which is the device's inclination angle relative to the magnetic North Pole, after accessing the TYPE_ORIENTATION sensor. The second basic rule is to determine the longitude and latitude after accessing the location APIs in the package android.location. Direction and GPS location are then combined to determine the building that the user directs his mobile toward, using the idea of dividing the map into squares and rearranging the buildings relative to the user location, depending on the fact that as you go north of the equator the latitude increases, and as you go east from the prime meridian the longitude increases. Then the priority of the buildings relative to the user location is computed, by computing the angle between the longitude of the user location and the line between the user location point and the building center point. The application is compatible with all Android platforms from Android 1.5 and up, and the algorithm of this Proposed System can also be used on other operating systems. Until now this application needs its inputs as a text file; in the future the only thing it will need is the map (any map) in GIS format, and it will take its data from that map.
The magnetic sensor does not work accurately if there is a magnetic field near the Android device (mobile or tablet), so when you use this application you should make sure there is no magnetic source within 10 cm of your device.
Figure 5. Re-arrange Buildings.
Figure 6. Compute The Priority Of Buildings. (a), (b)
Figure 7. IPO Diagram for the application. Inputs: the user's longitude and latitude, and the longitudes and latitudes of the square borders and building centers. Processing: (1) find the angle between magnetic north and the device's y axis using the orientation sensor; (2) find the user location (longitude and latitude) from GPS; (3) determine which square the user is in and rearrange the buildings relative to the user location; (4) match the angle from step 1 with the building locations from step 3. Output: information about the building the user directed his mobile towards, and a marker set on that building.
ACKNOWLEDGMENT
I take this opportunity to express my gratitude and deep regards to Eng. Abed Alkareem Alfanni for his encouragement.
REFERENCES
[1] M. Kenteris, D. Gavalas and D. Economou, "An innovative mobile electronic tourist guide application", Personal Ubiquitous Comput, vol. 13, (2009), pp. 103-118.
[2] U. Varshney, "Issues, requirements and support for location-intensive mobile commerce applications", Int J Mob Commun, vol. 1, no. 3, (2003), pp. 247-263.
[3] L. R. Pawar and S. S. Patwardhan, "Problems & Suggestions for Android City Tour Guide System Based on Web Services for Mumbai."
[4] Li Yu-hua, "The present competitive situation and improve measures of Chinese tourism and improve countermeasures", Henan Social Science, vol. 18, no. 5, (2010), pp. 229-231.
[5] Ma Bin, Di Min and Zhao Liao-ying, "Intelligent visitors management system based on ARM9 and nRF9E5", Journal of Mechanical & Electrical Engineering, vol. 26, no. 12, (2009), pp. 65-68.
[6] G. Milette and A. Stroud, Professional Android Sensor Programming, John Wiley & Sons, 2012.
[7] Official Android Developer's site (Electronic). Available: http://developer.android.com/guide/topics/sensors/sensors_position.html 2014-12-20.
[8] Official Android Developer's site (Electronic). Available: http://developer.android.com/guide/topics/location/strategies.html#Challenges 2014-12-22.
[9] E. D. Kaplan and C. J. Hegarty, eds., Understanding GPS: Principles and Applications, Artech House, 2005.
[10] Official Android Developer's site (Electronic). Available: http://developer.android.com/reference/android/hardware/SensorManager.html 2014-12-20.
[11] U. Shala and A. Rodriguez, "Indoor positioning using sensor-fusion in android devices", (2011).