Augmented reality overlays virtual content on the physical world, making it an efficient way to display location-based information. Tracking is the recorded trace of a location, with readings taken at a set time interval, after a set distance, after a change in direction of more than a certain angle, or by a combination of these triggers. Tracking divides into two types: outdoor tracking and indoor tracking, and augmented reality adds value and information to tracking applications in both settings. Outdoor tracking by the global positioning system (GPS) has recently become an essential component of navigation applications; indoor tracking, however, remains a challenge in the augmented reality field. Most augmented reality indoor tracking relies on optical and sensor-based techniques such as markers, beacons, Raspberry Pi devices, and route-planner modules for the building. As the technology advanced, indoor tracking began to rely on Wi-Fi, Bluetooth, geographic information systems, radio-frequency identification, and sensor-chip technologies. This paper surveys augmented reality tracking techniques from 2011 to 2021, comparing image-detection algorithms with various communication technologies and discussing the advantages and limitations of each.
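The recording triggers just described (a set time interval, a set distance travelled, or a change in direction beyond some angle) can be sketched as a single predicate. The thresholds and the point format below are illustrative, not taken from the survey:

```python
import math

def should_record(prev, curr, min_interval_s=5.0, min_distance_m=10.0,
                  min_heading_deg=30.0):
    """Decide whether to record a new tracking point.

    prev/curr are dicts with 't' (seconds), 'x', 'y' (metres) and
    'heading' (degrees). A point is recorded when ANY trigger fires:
    elapsed time, distance travelled, or change in direction.
    """
    if prev is None:
        return True                       # always record the first point
    if curr["t"] - prev["t"] >= min_interval_s:
        return True
    if math.hypot(curr["x"] - prev["x"], curr["y"] - prev["y"]) >= min_distance_m:
        return True
    turn = abs(curr["heading"] - prev["heading"]) % 360.0
    if min(turn, 360.0 - turn) >= min_heading_deg:   # shortest angular change
        return True
    return False
```

A combined policy, as the survey notes, is simply this OR of the three conditions; tightening any one threshold produces a denser trace.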
A genetic based indoor positioning algorithm using Wi-Fi received signal stre... (IAES IJAI)
The recent trend in location-based services has led to a proliferation of studies in indoor positioning technology. Wi-Fi received signal strength indicator (RSSI) fingerprinting and pedestrian dead reckoning (PDR) are two representative approaches. This research proposes a genetic algorithm to combine Wi-Fi fingerprinting and PDR. By taking advantage of PDR and the genetic algorithm, only a limited number of points with known coordinates need to be collected for the fingerprint dataset, after which target trajectories can be estimated with high accuracy. Results from our experiments and simulations show that even with noisy inertial measurement unit (IMU) sensor data, using RSSI measurements and the coordinates of 8 points, the proposed method achieves an average distance error of 1.589 meters, 34.4% lower than the conventional fingerprinting method.
With the rapid development of the smartphone industry, various positioning-enabled sensors such as GPS receivers, accelerometers, gyroscopes, digital compasses, cameras, Wi-Fi, and Bluetooth have been built into smartphones for communication, entertainment, and location-based services, allowing users to obtain a position fix from the GPS receiver.
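The PDR component mentioned in the abstract advances a position estimate by one step length along the current compass heading each time a step is detected. A minimal sketch; the step length and compass convention here are assumptions, not details from the paper:

```python
import math

def pdr_update(x, y, step_length_m, heading_deg):
    """One pedestrian dead-reckoning update: advance the current
    position by one detected step along the estimated heading.
    Heading is measured clockwise from north (compass convention)."""
    rad = math.radians(heading_deg)
    return (x + step_length_m * math.sin(rad),
            y + step_length_m * math.cos(rad))

# Walk three detected steps due east from the origin.
pos = (0.0, 0.0)
for _ in range(3):
    pos = pdr_update(*pos, step_length_m=0.7, heading_deg=90.0)
```

Heading and step detection come from the IMU, which is exactly where the noise the paper models enters; the genetic algorithm's role is to anchor this drifting trajectory to the sparse fingerprint points.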
Wi-Fi fingerprinting-based floor detection using adaptive scaling and weighte... (CSIT, iaesprime)
In practical applications, accurate floor determination in multi-building/floor environments is particularly useful and plays an increasingly crucial role in the performance of location-based services. Accurate and robust building and floor detection can reduce the location search space and improve positioning and wayfinding accuracy. As an efficient solution, this paper proposes a floor identification method that exploits statistical properties of the signals propagated by wireless access points to apply an exponent to the received signal strength (RSS) values in the radio map. A single-layer extreme learning machine weighted autoencoder (ELM-WAE) then performs feature extraction and dimensionality reduction, and finally an ELM-based classifier is trained over the new feature space to determine the floor level. To evaluate the proposed model we utilized three different datasets captured in real scenarios. The evaluation shows that the model achieves state-of-the-art performance and improves floor-detection accuracy compared with multiple recent techniques, identifying the floor level with 97.30%, 95.32%, and 96.39% accuracy on the UJIIndoorLoc, Tampere, and UTSIndoorLoc datasets, respectively.
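The abstract's step of applying an exponent to RSS values resembles the "powed" representation sometimes used for Wi-Fi radio maps, which maps raw dBm readings to positive values that emphasise strong signals. A sketch under that assumption; the constants are illustrative, not from the paper:

```python
def powed_rss(rss, min_rss=-104.0, beta=2.71828):
    """Map raw RSS readings (dBm; None for a missing AP) to positive
    'powed' values: shift above the weakest observable RSS, normalise
    to [0, 1], then raise to the exponent beta so that strong signals
    dominate the representation. Constants are illustrative."""
    out = []
    for v in rss:
        if v is None or v >= 0:      # missing / not-detected access point
            out.append(0.0)
        else:
            shifted = max(v - min_rss, 0.0) / (-min_rss)
            out.append(shifted ** beta)
    return out
```

The transformed vectors, rather than raw dBm values, would then feed the ELM-WAE feature-extraction stage.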
Abstract: Wireless location finding is one of the key technologies for wireless sensor networks. GPS is the usual technology, but it only works outdoors. Indoors, in buildings such as supermarkets, big malls, parking garages, and universities, GPS accuracy degrades sharply and the position shown on the map is often wrong. Indoor localization requires higher accuracy, so GPS is not feasible here; it also drains a mobile device's battery within a few hours when used continuously. To obtain an accurate indoor position we instead use an RSSI-based trilateration localization algorithm. The algorithm is low cost, requires no additional hardware, is easy to understand, and consumes far less battery than GPS; for these reasons it has become a mainstream localization approach in wireless sensor networks. As wireless sensor networks and smart devices spread, Wi-Fi access points multiply: a mobile device detects three or more Wi-Fi hotspots at known positions and computes its current location from their signal values. This paper proposes a system that finds the location of a mobile device indoors, navigates to a destination using a navigation function, and keeps the battery consumption of tracking low.
Goals:
1. Works in places where GPS cannot (indoor environments).
2. Reduces battery consumption compared with GPS.
3. Uses existing Wi-Fi routers, so no extra hardware is needed.
4. Provides the path as well as information about the location, as required by the user.
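The RSSI-based trilateration described above can be sketched in two parts: a log-distance path-loss model converting each router's RSSI into a range, and a least-squares position fix from three or more routers at known positions. The constants (transmit power at 1 m, path-loss exponent) are illustrative, not values from the paper:

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: RSSI(d) = tx_power - 10*n*log10(d),
    solved for d. tx_power is the RSSI expected at 1 m."""
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(anchors, distances):
    """Solve for (x, y) from >= 3 anchors by subtracting the first
    circle equation from the others (linearisation), then solving the
    resulting 2x2 normal equations."""
    (x1, y1), d1 = anchors[0], distances[0]
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        ax, ay = 2.0 * (xi - x1), 2.0 * (yi - y1)
        b = d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2
        a11 += ax * ax; a12 += ax * ay; a22 += ay * ay
        b1 += ax * b;   b2 += ay * b
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With three routers at (0, 0), (10, 0), and (0, 10) and ranges measured from a device at (3, 4), the fix recovers (3, 4) exactly; with noisy RSSI the least-squares form simply averages the inconsistencies.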
Engfi Gate: An Indoor Guidance System using Marker-based Cyber-Physical Augme... (IJECE, IAES)
A guidance system is needed when freshmen explore their new building environment. With the advancement of mobile technologies, a guidance system running on mobile computing devices such as phones or tablets can help freshmen locate a desired destination with ease. The proposed system consists of three main subsystems: the marker-based cyber-physical interaction (CPI) system, the indoor positioning (IP) system, and the augmented-reality (AR) system. With the help of visible and invisible markers, the CPI system lets users interact across the physical and cyber environments; the IP system produces accurate user position information; and the AR system provides a rich user experience. An Android application named Engfi Gate was developed to realize the design in the test environment. The paper also compares the proposed system with other related systems, and the architecture of the Engfi Gate system can be reused in other location-based applications.
Software engineering model based smart indoor localization system using deep-... (TELKOMNIKA Journal)
In recent years, locating objects or persons inside a specific building has become highly sought after. It is well known that the global positioning system (GPS) cannot be used in indoor environments due to the lack of signal, so a method that works inside is needed. The proposed system uses deep learning to classify places from captured images and consists of a software part and a hardware part. The software part is built on a software engineering model to increase reliability, flexibility, and scalability; in this part, the training and validation dataset is collected using the Raspberry Pi 3 camera and used as input to the proposed deep learning model. In the hardware part, a Raspberry Pi 3 loads the model and produces prediction results, with a camera collecting the image dataset. A two-wheeled car serves as the object for the indoor localization project. The obtained accuracy is 99.6% on the training dataset and 100% on the validation dataset.
Development of real-time indoor human tracking system using LoRa technology (IJECE, IAES)
Industrial growth has increased the number of jobs and hence the number of employees, making it impossible to track the location of all employees in the same building at the same time, as they are placed in different departments. In this work, a real-time indoor human tracking system is developed to determine the location of employees in a real-time implementation. Long-range (LoRa) technology is used as the communication medium between the tracker and the gateway because of its low power consumption and high coverage range, besides requiring low deployment cost. The received signal strength indicator (RSSI) based positioning method measures the power level at the receiver, which is the gateway, to determine the location of the employees. Different scenarios were considered to evaluate the precision and reliability of the developed system, including the size of the area, the number of obstacles in it, and the heights of the tracker and the gateway. A real-time testbed implementation shows that the system is precise and reliable in all considered scenarios.
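RSSI readings from several gateways can also be fused without a calibrated path-loss model. One simple option is a weighted centroid, where a stronger signal pulls the estimate toward that gateway; the weighting scheme below is an assumption for illustration, not the paper's method:

```python
def weighted_centroid(gateways, rssi_dbm):
    """Estimate (x, y) as the centroid of gateway positions weighted
    by received power (stronger signal -> larger weight). Weights use
    linear power, 10^(RSSI/10), so dBm readings combine consistently."""
    weights = [10.0 ** (r / 10.0) for r in rssi_dbm]
    total = sum(weights)
    x = sum(w * gx for w, (gx, _) in zip(weights, gateways)) / total
    y = sum(w * gy for w, (_, gy) in zip(weights, gateways)) / total
    return x, y
```

The estimate stays inside the convex hull of the gateways, which makes the method robust but biased toward gateway-dense areas; the paper's scenario tests (obstacles, area size, antenna height) are exactly the factors that perturb the RSSI-to-weight mapping.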
Presentation by Noura Baccar: "Innovation on Indoor GeoLocalization Applications..." (Cynapsys IT Hotspot)
The presentation first gives an overview of localization techniques and technologies, with a comparative analysis of the current state of the indoor geolocation application market.
It then describes an innovative approach to indoor geolocation based on advanced fuzzy logic, and the resulting mobile application developed for the Cynapsys premises.
Fingerprint indoor positioning based on user orientations and minimum computa... (TELKOMNIKA Journal)
An indoor positioning system (IPS) plays an important role in the Internet of Things. IPSs build on many existing radio-frequency technologies; one of the most popular methods is WLAN fingerprinting, because the technology is already installed widely inside buildings and provides a high level of accuracy. Performance is affected both by the person holding the mobile device (the user) and by the people around the user. This research aimed to minimize the computation time of the kNN search. The results showed that computation time grows with the value of k in kNN, especially with the Cityblock and Minkowski distance functions. The smallest average computation time, 2.14 ms, was obtained with Cityblock, while the times for Euclidean and Chebyshev were relatively stable at 2.2 ms and 2.23 ms, respectively.
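The kNN search over a fingerprint database with the distance functions compared in the paper can be sketched as follows; the fingerprint record format is illustrative:

```python
def distance(a, b, metric="euclidean"):
    """Distance between two RSS vectors. Cityblock is L1, Chebyshev
    is L-infinity, Euclidean is L2 (Minkowski generalises all three)."""
    diffs = [abs(x - y) for x, y in zip(a, b)]
    if metric == "cityblock":
        return sum(diffs)
    if metric == "chebyshev":
        return max(diffs)
    if metric == "euclidean":
        return sum(d * d for d in diffs) ** 0.5
    raise ValueError(metric)

def knn_position(fingerprints, query_rss, k=3, metric="euclidean"):
    """Average the coordinates of the k reference points whose stored
    RSS vectors are closest to the measured one (unweighted kNN)."""
    ranked = sorted(fingerprints,
                    key=lambda fp: distance(fp["rss"], query_rss, metric))
    best = ranked[:k]
    return (sum(fp["x"] for fp in best) / k,
            sum(fp["y"] for fp in best) / k)
```

Cityblock's speed advantage in the paper is intuitive from this sketch: it needs only additions, where Euclidean squares every component and Minkowski exponentiates.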
Ubiquitous Positioning: A Taxonomy for Location Determination on Mobile Navig... (sipij)
Determining location in an obstructed area can be very challenging, especially when Global Positioning System signals are blocked; users find it difficult to navigate on-site in such conditions, for example in an indoor car park or other obstructed environment. Positioning sometimes needs to combine several sensors and methods to determine location more intelligently, reliably, and ubiquitously. Ubiquitous positioning in a mobile navigation system is a promising location technique, since the mobile phone is a familiar personal electronic device for many people. However, proposed ubiquitous positioning systems need better development, and system developers lack good frameworks for understanding the options when building them. This paper proposes a taxonomy to address both problems, constructed from a literature study of papers and articles on position estimation usable everywhere in mobile navigation systems. Researchers can also use the taxonomy to scope future work in the area of ubiquitous positioning.
Partial half fine-tuning for object detection with unmanned aerial vehicles (IAES IJAI)
Deep learning has shown outstanding performance in object detection with unmanned aerial vehicles (UAVs), where fine-tuning improves performance by transferring features from pre-trained models to specific tasks. Despite the popularity of fine-tuning, no prior work has studied its precise effects on UAV object detection. In this research we experimentally analyze each existing fine-tuning strategy to determine the best procedure for transferring features. We also propose a partial half fine-tuning strategy with two variants: first half fine-tuning (First half F-T) and final half fine-tuning (Final half F-T). Using the VisDrone dataset for training and validation, we show that Final half F-T outperforms the other fine-tuning techniques and beats one state-of-the-art method by 19.7% relative to the best results of previous studies.
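Framework details aside, the First half F-T and Final half F-T variants amount to choosing which half of an ordered layer list stays trainable while the rest is frozen. A framework-agnostic sketch; the layer names are hypothetical:

```python
def trainable_layers(layers, strategy):
    """Return the set of layer names left trainable under each
    strategy: 'first_half' updates the early layers, 'final_half'
    the late ones, 'full' all of them. Layers must be listed in
    network order, input side first."""
    half = len(layers) // 2
    if strategy == "full":
        return set(layers)
    if strategy == "first_half":
        return set(layers[:half])
    if strategy == "final_half":
        return set(layers[half:])
    raise ValueError(strategy)

# Hypothetical backbone; in a real framework you would set
# requires_grad / layer.trainable from the returned set.
backbone = ["conv1", "conv2", "conv3", "conv4"]
```

Final half F-T, the variant the paper found best, keeps the generic early features fixed and adapts only the task-specific later layers.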
LEARNING OF ROBOT NAVIGATION TASKS BY PROBABILISTIC NEURAL NETWORK (csandit)
This paper reports results of artificial neural networks for robot navigation tasks. Machine learning methods have proven useful in many complex problems of mobile robot control; in particular we deal with the well-known strategy of navigating by "wall-following". In this study, a probabilistic neural network (PNN) structure was used for the robot navigation tasks. The PNN result was compared with Logistic Perceptron, Multilayer Perceptron, Mixture of Experts, and Elman neural networks, and with previous studies on robot navigation using the same dataset. The PNN achieved the best classification accuracy, 99.635%, on that dataset.
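A PNN classifies by summing a Gaussian kernel over each class's training samples and picking the class with the largest average activation. A minimal sketch; the smoothing parameter sigma and the toy data are illustrative:

```python
import math

def pnn_predict(train, query, sigma=0.5):
    """Probabilistic neural network: every training sample contributes
    a Gaussian kernel centred on itself (the 'pattern layer'); the
    'summation layer' averages kernels per class and the class with
    the largest activation wins. 'train' maps label -> feature vectors."""
    scores = {}
    for label, samples in train.items():
        s = 0.0
        for v in samples:
            sq = sum((a - b) ** 2 for a, b in zip(v, query))
            s += math.exp(-sq / (2.0 * sigma * sigma))
        scores[label] = s / len(samples)   # average over the class
    return max(scores, key=scores.get)
```

Training is just storing the samples, which is why PNNs train quickly; the cost is that prediction touches every stored pattern.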
Navigation for Indoor Location Based On QR Codes and Google Maps – A Survey (AM Publications, India)
QR codes are a simple and efficient tool for obtaining an accurate indoor user location on smartphones. An increasing number of geolocation services exploit smartphone capabilities, most of which incorporate GPS positioning; the smartphone's compass directs the user to the destination, and its accelerometers make it possible to estimate walking distance as auxiliary information. The idea is to calculate the user's position inside a building using QR codes and Google Maps. Navigation is the process of accurately ascertaining one's position, then planning and following a route to the goal location. Mobile phones were once devices merely used to communicate; with techniques such as GPS, and with sensors such as the compass and accelerometers that determine device orientation, location-based applications coupled with augmented reality views become possible.
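Combining a scanned QR anchor with accelerometer step counts can be sketched as below. The payload format and step length are assumptions, since the survey fixes neither:

```python
import math

def parse_qr_payload(payload):
    """Assumed payload format 'x;y' giving the marker's indoor map
    coordinates in metres (the survey does not fix an encoding)."""
    x, y = payload.split(";")
    return float(x), float(y)

def position_from_steps(anchor, steps, step_length_m, heading_deg):
    """Estimate position as: last scanned QR anchor + walked distance
    (step count * step length) along the compass heading, measured
    clockwise from north."""
    d = steps * step_length_m
    rad = math.radians(heading_deg)
    return (anchor[0] + d * math.sin(rad), anchor[1] + d * math.cos(rad))
```

Each new QR scan resets the anchor, which bounds the accelerometer drift between markers; this is the main appeal of the QR-plus-sensors scheme.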
Dynamic navigation indoor map using Wi-Fi fingerprinting mobile technology (journal BEEI)
This paper presents the exploitation of Wi-Fi signals using the fingerprinting method to capture location and provide possible navigation paths. The approach is practical because current smartphones are equipped with sensors that can capture Wi-Fi signals from the access points inside a building. Based on a comparative study, the AnyPlace development tool is used to build the dynamic indoor navigation map; its components, namely Architect, Viewer, Navigator, and Logger, each serve a specific function. As a case study, the proposed approach guides users through Sunway Pyramid Shopping Mall, Malaysia, using its floor plan with Google Maps as the base map for proof of concept. From a developer's point of view, the approach is viable for creating a dynamic indoor navigation map, provided the floor plans are generated first and integrated with the SDK tool that works with the navigation APIs. It is hoped that the work can be extended to more complex indoor maps.
iBeacon-based indoor positioning system: from theory to practical deployment (IJECE, IAES)
Developing indoor positioning systems became essential because global positioning system signals do not work well in indoor environments. Mobile positioning can be accomplished with many radio-frequency technologies, such as Bluetooth low energy (BLE), wireless fidelity (Wi-Fi), and ultra-wideband (UWB). Given the pressing need for indoor positioning, this work presents a smartphone deployment scheme using Bluetooth iBeacons. Three main parts, hardware deployment, software deployment, and positioning accuracy assessment, are discussed in detail to find the optimal solution for a complete indoor positioning system. The application and experimental results show that the proposed solution is feasible and that an indoor positioning system is completely attainable.
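A common way to turn an iBeacon advertisement into a range uses the calibrated 1 m RSSI (the "measured power") that iBeacons broadcast, with a path-loss exponent for the environment. A sketch with illustrative constants:

```python
def ibeacon_distance(rssi_dbm, measured_power_dbm=-59.0, n=2.0):
    """Estimate range from an iBeacon advertisement using the
    log-distance model. measured_power is the calibrated RSSI at 1 m
    that the beacon broadcasts; n is the path-loss exponent
    (about 2 in free space, higher indoors)."""
    return 10.0 ** ((measured_power_dbm - rssi_dbm) / (10.0 * n))

def proximity_zone(d):
    """Map an estimated distance to the coarse proximity zones
    commonly used with iBeacons; the cut-offs here are illustrative."""
    if d < 0.5:
        return "immediate"
    if d < 4.0:
        return "near"
    return "far"
```

Because BLE RSSI fluctuates strongly indoors, deployments typically smooth readings over a window before ranging; the coarse zones absorb much of the remaining noise.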
Convolutional neural network with binary moth flame optimization for emotion ... (IAES IJAI)
Electroencephalograph (EEG) signals can reflect brain activity in real time, and using them to analyze human emotional states is a common line of study. EEG signatures of emotions are not distinctive and differ from person to person, since everyone responds differently to the same stimuli; EEG signals are therefore subject-dependent, and subject-dependent emotion detection has proven effective. To achieve enhanced accuracy and a high true-positive rate, the proposed system uses a binary moth flame optimization (BMFO) algorithm for feature selection and convolutional neural networks (CNNs) for classification. Optimal features are chosen using accuracy as the objective function, and the selected features are then classified with a CNN to discriminate between different emotion states.
A novel ensemble model for detecting fake news (IAES IJAI)
Given the growing proliferation of fake news over the past couple of years, our objective in this paper is to propose an ensemble model for automatically classifying news articles as either real or fake. For this purpose, we opt for a blending technique that combines three models: bidirectional long short-term memory (Bi-LSTM), a stochastic gradient descent classifier, and a ridge classifier. The proposed model (BI-LSR) achieves outstanding results on real-world datasets, with an accuracy score of 99.16%. This ensemble thus outperforms individual conventional machine learning and deep learning models as well as many ensemble learning approaches cited in the literature.
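A blending ensemble in its simplest form averages the member models' probability estimates and thresholds the result. The paper learns its blender; the fixed equal weights below are a simplification for illustration:

```python
def blend_predict(prob_lists, weights=None):
    """Blend per-model probabilities that each article is fake by
    weighted averaging, then threshold at 0.5. prob_lists holds one
    list of probabilities per model, aligned by article; equal weights
    are used when none are given (the paper's blender is learned)."""
    n = len(prob_lists)
    weights = weights or [1.0 / n] * n
    blended = [sum(w * p[i] for w, p in zip(weights, prob_lists))
               for i in range(len(prob_lists[0]))]
    labels = [1 if p >= 0.5 else 0 for p in blended]   # 1 = fake
    return labels, blended
```

With three models agreeing on a high probability for one article and a low one for another, the blend preserves both calls while damping any single model's outliers.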
Similar to Survey of indoor tracking systems using augmented reality
Software engineering model based smart indoor localization system using deep-...TELKOMNIKA JOURNAL
During the last few years, the allocation of objects or persons inside a specific
building is highly required. It is well known that the global positioning system
(GPS) cannot be adopted in indoor environment due to the lack of signals.
Therefore, it is important to discover a new way that works inside.
The proposed system uses the deep learning techniques to classify places based
on capturing images. The proposed system contains two parts: software part
and hardware part. The software part is built based on software engineering
model to increase the reliability, flexibility, and scalability. In addition,
this part, the dataset is collected using the Raspberry Pi III camera as training
and validating data set. This dataset is used as an input to the proposed deep
learning model. In the hardware part, Raspberry Pi III is used for loading
the proposed model and producing prediction results and a camera that is used
to collect the images dataset. Two wheels’ car is adopted as an object
for introducing indoor localization project. The obtained training accuracy
is 99.6% for training dataset and 100% for validating dataset.
Development of real-time indoor human tracking system using LoRa technology IJECEIAES
Industrial growth has increased the number of jobs hence increase thenumber of employees. Therefore, it is impossible to track the location of allemployees in the same building at the same time as they are placed in adifferent department. In this work, a real-time indoor human tracking systemis developed to determine the location of employees in a real-timeimplementation. In this work, the long-range (LoRa) technology is used asthe communication medium to establish the communication between thetracker and the gateway in the developed system due to its low power withhigh coverage range besides requires low cost for deployment. The receivedsignal strength indicator (RSSI) based positioning method is used to measurethe power level at the receiver which is the gateway to determine thelocation of the employees. Different scenarios have been considered toevaluate the performance of the developed system in terms of precision andreliability. This includes the size of the area, the number of obstacles in theconsidered area, and the height of the tracker and the gateway. A real-timetestbed implementation has been conducted to evaluate the performance ofthe developed system and the results show that the system has high precisionand are reliable for all considered scenarios.
Présentation noura baccar " Innovation on Indoor GeoLocalization Applications...Cynapsys It Hotspot
la présentation expose en premier lieu une vue d'ensemble des techniques et technologies de localisation munie d’une analyse comparative de l'état des lieux du marché des applications de géolocalisation indoor.
deuxièmement donne une description d'une approche novatrice pour la géolocalisation intérieure basée sur la Logique Floue avancée et l'application mobile résultante développée pour les locaux Cynapsys.
Fingerprint indoor positioning based on user orientations and minimum computa...TELKOMNIKA JOURNAL
Indoor Positioning System (IPS) has an important role in the field of Internet of Thing. IPS works
based on many existing radio frequency technologies. One of the most popular methods is WLAN
Fingerprint because this technology has been installed widely inside buildings and it provides a high level
of accuracy. The performance is affected by people who hold mobile devices (user) and also people
around the users. This research aimed to minimize the computation time of kNN searching process.
The results showed that when the value of k in kNN was greater, the computation time increased,
especially when using Cityblock and Minkowski distance function. The smallest average computation time
was 2.14 ms, when using Cityblock. Then the computational time for Euclidean and Chebychev were
relatively stable, i.e. 2.2 ms and 2.23 ms, respectively.
Ubiquitous Positioning: A Taxonomy for Location Determination on Mobile Navig...sipij
The location determination in obstructed area can be very challenging especially if Global Positioning System are blocked. Users will find it difficult to navigate directly on-site in such condition, especially indoor car park lot or obstructed environment. Sometimes, it needs to combine with other sensors and positioning methods in order to determine the location with more intelligent, reliable and ubiquity. By using ubiquitous positioning in mobile navigation system, it is a promising ubiquitous location technique in a mobile phone since as it is a familiar personal electronic device for many people. However, there is an increasing need for better development of proposed ubiquitous positioning systems. System developers are also lacking of good frameworks for understanding different options during building ubiquitous positioning systems. This paper proposes taxonomy to address both of these problems. The proposed taxonomy has been constructed from a literature study of papers and articles on positioning estimation that can be used to determine location everywhere on mobile navigation system. For researchers the taxonomy can also be used as an aid for scoping out future research in the area of ubiquitous positioning.
Partial half fine-tuning for object detection with unmanned aerial vehiclesIAESIJAI
Deep learning has shown outstanding performance in object detection tasks with unmanned aerial vehicles (UAVs), which involve the fine-tuning technique to improve performance by transferring features from pre-trained models to specific tasks. However, despite the immense popularity of fine-tuning, no prior work has studied its precise effects on object detection tasks with UAVs. In this research, we conduct an experimental analysis of each existing fine-tuning strategy to determine which procedure is best for transferring features. We also propose a partial half fine-tuning strategy, which we divide into two techniques: first half fine-tuning (First half F-T) and final half fine-tuning (Final half F-T). We use the VisDrone dataset for the training and validation process. Here we show that partial half fine-tuning with Final half F-T can outperform other fine-tuning techniques and is also better than one of the state-of-the-art methods, by a margin of 19.7% over the best results of previous studies.
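The split the abstract describes can be sketched framework-agnostically. The helper below is an assumption about the layer-level granularity of the split, not the authors' code; it decides which half of an ordered layer list stays trainable:

```python
def partial_half_finetune(layer_names, mode):
    """Return a {layer: trainable?} map for partial half fine-tuning.

    mode="first_half": only the first half of the layers is updated
    (the final layers keep their pre-trained weights); "final_half"
    is the reverse. The midpoint split is an illustrative assumption;
    the paper's exact split point may differ."""
    half = len(layer_names) // 2
    if mode == "first_half":
        trainable = set(layer_names[:half])
    elif mode == "final_half":
        trainable = set(layer_names[half:])
    else:
        raise ValueError("mode must be 'first_half' or 'final_half'")
    return {name: (name in trainable) for name in layer_names}
```

In a real framework one would then set each layer's trainable flag (e.g. `requires_grad` in PyTorch) from this map before training.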
LEARNING OF ROBOT NAVIGATION TASKS BY PROBABILISTIC NEURAL NETWORKcsandit
This paper reports results of an artificial neural network for robot navigation tasks. Machine learning methods have proven useful in many complex problems concerning mobile robot control. In particular, we deal with the well-known strategy of navigating by “wall-following”. In this study, a probabilistic neural network (PNN) structure was used for robot navigation tasks. The PNN result was compared with the results of the Logistic Perceptron, Multilayer Perceptron, Mixture of Experts, and Elman neural networks, and with the results of previous studies on robot navigation tasks using the same dataset. The PNN achieved the best classification accuracy, at 99.635%, on the same dataset.
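The PNN's decision rule is simple enough to sketch: each class's score is the average Gaussian kernel response of its training patterns, and the highest-scoring class wins. The following is a minimal illustration (the smoothing parameter sigma and the toy features are assumptions, not the paper's settings):

```python
import math

def pnn_predict(x, train, sigma=0.5):
    """Probabilistic neural network classifier (Specht-style sketch).
    train: dict mapping class label -> list of feature vectors.
    Each class score is the mean Gaussian kernel response of its
    training patterns; the highest-scoring class is returned."""
    def kernel(a, b):
        d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return math.exp(-d2 / (2.0 * sigma ** 2))
    scores = {c: sum(kernel(x, p) for p in pts) / len(pts)
              for c, pts in train.items()}
    return max(scores, key=scores.get)
```

Training is essentially just storing the patterns, which is why PNNs are quick to build but memory-hungry at prediction time.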
Navigation for Indoor Location Based On QR Codes and Google Maps – A SurveyAM Publications,India
QR codes are a simple and efficient tool for obtaining an accurate indoor user location with a smartphone. An increasing number of geo-location services exploit the capabilities of smartphones, most of which incorporate GPS location; the smartphone compass can direct the user to the destination, and with the help of accelerometers it is possible to estimate walking distances as auxiliary information. The idea is to calculate the user's position inside a building by using QR codes and Google Maps. Navigation is the process of accurately ascertaining one's position, then planning and following a route to the goal location. Mobile phones are no longer devices merely used to communicate: based on techniques like GPS and on sensors such as the compass and accelerometers that can determine the orientation of the device, location-based applications coupled with augmented reality views become possible.
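One hedged way to realize the QR-code idea is to encode the position directly in the QR payload and parse it on scan. The payload schema below is purely hypothetical, chosen for illustration; the QR decoding itself would come from the smartphone SDK or a decoding library:

```python
def parse_qr_location(payload):
    """Parse a hypothetical QR payload of the form
    'LOC;building=<id>;floor=<n>;x=<m>;y=<m>' into a position dict.
    The schema is an assumption for illustration; a real deployment
    would define its own encoding."""
    if not payload.startswith("LOC;"):
        raise ValueError("not a location QR payload")
    fields = dict(part.split("=", 1) for part in payload[4:].split(";"))
    return {
        "building": fields["building"],
        "floor": int(fields["floor"]),
        "x": float(fields["x"]),       # meters within the floor plan
        "y": float(fields["y"]),
    }
```

The parsed coordinates would then seed dead reckoning (compass plus accelerometer step counts) until the next QR code is scanned.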
Dynamic navigation indoor map using Wi-Fi fingerprinting mobile technologyjournalBEEI
This paper presents the exploitation of Wi-Fi signal sensing using the fingerprinting method to capture the location and provide possible navigation paths. Such an approach is practical because current smartphones are equipped with sensors that can capture the Wi-Fi signals from access points inside a building. From the comparative study conducted, the AnyPlace development tool was chosen for developing the dynamic navigation indoor map; its components, namely Architect, Viewer, Navigator, and Logger, serve different specific functions. As a case study, we implement the proposed approach to guide users navigating Sunway Pyramid Shopping Mall, Malaysia, using its floor plan together with Google Maps as the base map for proof of concept. From the developer's point of view, the proposed approach is viable for creating a dynamic navigation indoor map, provided that the floor plans are generated first and integrated with the SDK tool so they work with the navigation APIs. It is hoped that the proposed work can be extended to more complex indoor maps for better implementation.
iBeacon-based indoor positioning system: from theory to practical deployment IJECEIAES
Developing an indoor positioning system became essential when global positioning system signals could not work well in indoor environments. Mobile positioning can be accomplished via many radio frequency technologies, such as Bluetooth low energy (BLE), wireless fidelity (Wi-Fi), and ultra-wideband (UWB). Given the pressing need for indoor positioning systems, in this work we present a deployment scheme for smartphones using Bluetooth iBeacons. Three main parts, hardware deployment, software deployment, and positioning accuracy assessment, are discussed carefully to find the optimal solution for a complete indoor positioning system. Our application and experimental results show that the proposed solution is feasible and that a complete indoor positioning system is attainable.
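A common building block in such iBeacon deployments is converting an RSSI reading into an approximate distance with the log-distance path-loss model. The sketch below uses illustrative default parameters, not the paper's calibration:

```python
def ibeacon_distance(rssi, tx_power=-59, n=2.0):
    """Estimate distance (meters) from an iBeacon RSSI reading using
    the log-distance path-loss model:
        d = 10 ** ((tx_power - rssi) / (10 * n))
    tx_power is the calibrated RSSI at 1 m (broadcast by the beacon)
    and n the environment-dependent path-loss exponent. The defaults
    here are illustrative assumptions."""
    return 10 ** ((tx_power - rssi) / (10.0 * n))
```

With distance estimates to three or more beacons at known positions, the smartphone's position can then be estimated by trilateration.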
Convolutional neural network with binary moth flame optimization for emotion ...IAESIJAI
Electroencephalograph (EEG) signals can reflect brain activity in real time, and using them to analyze human emotional states is a common line of study. EEG signals for emotions are not distinctive and differ from one person to another, as each person has different emotional responses to the same stimuli; the signals are therefore subject dependent, and subject-dependent detection of emotions has proven effective. To achieve enhanced accuracy and a high true positive rate, the proposed system uses a binary moth flame optimization (BMFO) algorithm for feature selection and convolutional neural networks (CNNs) for classification. Optimal features are chosen using accuracy as the objective function, and the optimally chosen features are then classified with a CNN to discriminate between different emotion states.
A novel ensemble model for detecting fake newsIAESIJAI
Due to the growing proliferation of fake news over the past couple of years, our objective in this paper is to propose an ensemble model for automatically classifying news articles as either real or fake. For this purpose, we opt for a blending technique that combines three models, namely bidirectional long short-term memory (Bi-LSTM), a stochastic gradient descent classifier, and a ridge classifier. The implementation of the proposed model (BI-LSR) on real-world datasets has shown outstanding results: it achieved an accuracy score of 99.16%. This ensemble has thus proven to perform better than individual conventional machine learning and deep learning models, as well as many ensemble learning approaches cited in the literature.
K-centroid convergence clustering identification in one-label per type for di...IAESIJAI
Disease prediction is a high-demand field that requires significant support from machine learning (ML) to improve result efficiency. This research applies K-means-style clustering to supervised disease prediction where each class has only one labeled data point. The K-centroid convergence clustering identification (KC3I) system is based on semi-supervised K-means clustering but requires only a single labeled point per class for training, using the training dataset to update the centroids. The KC3I model also includes a dictionary box that indexes all input centroids before and after the update process; each centroid matches a corresponding label inside this box. After training, each time input features arrive, the trained centroids assign them to a cluster based on Euclidean distance and then convert them into the specific class name coherent with that centroid's index. Two validation stages were carried out and met expectations in terms of precision, recall, F1-score, and absolute accuracy. The last part demonstrates the possibility of feature reduction by selecting the most important features with the extra trees classifier method: when the data are fed into the KC3I system using only the most important features, accuracy remains the same.
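The single-label-per-class centroid idea can be sketched as follows; the one-pass running-mean update and the toy two-class data are simplifying assumptions, not the paper's exact procedure:

```python
def kc3i_train(seeds, unlabeled):
    """Sketch of the KC3I idea: start from one labeled point per
    class (the initial centroids), then sweep the unlabeled training
    data, assigning each point to its nearest centroid (Euclidean
    distance, as in the paper) and updating that centroid as the
    running mean of its cluster."""
    centroids = {label: list(vec) for label, vec in seeds.items()}
    counts = {label: 1 for label in seeds}
    for x in unlabeled:
        label = min(centroids, key=lambda c: sum(
            (a - b) ** 2 for a, b in zip(x, centroids[c])))
        counts[label] += 1
        k = counts[label]
        # incremental mean update of the winning centroid
        centroids[label] = [c + (a - c) / k
                            for c, a in zip(centroids[label], x)]
    return centroids

def kc3i_classify(x, centroids):
    # The label dictionary maps each trained centroid back to a class.
    return min(centroids, key=lambda c: sum(
        (a - b) ** 2 for a, b in zip(x, centroids[c])))
```

The label-to-centroid dictionary plays the role of the "dictionary box" the abstract describes.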
Plant leaf detection through machine learning based image classification appr...IAESIJAI
Since maize is a staple diet, especially for vegetarians and vegans, maize leaf disease has a significant influence on the food industry, including maize crop productivity. Maize quality must therefore be kept optimal, which requires safeguarding the crop from several illnesses. As a result, there is great demand for an automated system that can identify a condition early on so that appropriate action can be taken. Early disease identification is crucial but poses a major obstacle. In this research project, we adopt the fundamental k-nearest neighbor (KNN) model and concentrate on building and developing an enhanced k-nearest neighbor (EKNN) model. EKNN aids in identifying several classes of disease. To gather discriminative, boundary, pattern, and structurally linked information, additional high-quality fine and coarse features are generated and then used in the classification process, which offers high-quality gradient-based features. The proposed model is assessed on the PlantVillage dataset and compared with many standard classification models using various metrics.
Backbone search for object detection for applications in intrusion warning sy...IAESIJAI
In this work, we propose a novel backbone search method for object detection for applications in intrusion warning systems. The goal is to find a compact model for use in embedded thermal imaging cameras widely used in intrusion warning systems. The proposed method is based on faster region-based convolutional neural network (Faster R-CNN) because it can detect small objects. Inspired by EfficientNet, the sought-after backbone architecture is obtained by finding the most suitable width scale for the base backbone (ResNet50). The evaluation metrics are mean average precision (mAP), number of parameters, and number of multiply–accumulate operations (MACs). The experimental results showed that the proposed method is effective in building a lightweight neural network for the task of object detection. The obtained model can keep the predefined mAP while minimizing the number of parameters and computational resources. All experiments are executed elaborately on the person detection in intrusion warning systems (PDIWS) dataset.
Deep learning method for lung cancer identification and classificationIAESIJAI
Lung cancer (LC) is claiming many lives and is becoming a serious cause of concern. Detecting LC at an early stage improves the chances of recovery, and the accuracy of early detection can be improved with a convolutional neural network (CNN) based deep learning approach. In this paper, we present two methodologies for lung cancer detection (LCD) applied to the lung image database consortium (LIDC) and image database resource initiative (IDRI) datasets. Classification of the LC images is carried out using a support vector machine (SVM) and a deep CNN, with the CNN trained in i) multiple batches and ii) a single batch to classify images as cancer or non-cancer. All methods are implemented in MATLAB. The classification accuracy obtained by SVM is 65%, whereas the deep CNN produced detection accuracies of 80% and 100% for multiple- and single-batch training, respectively. The novelty of our experimentation is the near-100% classification accuracy obtained by our deep CNN model when tested on 25 lung computed tomography (CT) test images, each of size 512×512 pixels, in fewer than 20 iterations, compared with the work carried out by other researchers using cropped LC nodule images.
Optically processed Kannada script realization with Siamese neural network modelIAESIJAI
Optical character recognition (OCR) is a technology that allows computers to recognize and extract text from images or scanned documents; it is commonly used to convert printed or handwritten text into a machine-readable format. This study presents an OCR system for Kannada characters based on a Siamese neural network (SNN). The SNN is a deep neural network comprising two identical convolutional neural networks (CNNs) that compare scripts and rank them by dissimilarity; when a low dissimilarity score is found, a character match is predicted. In this work, the authors use 5 classes of Kannada characters, initially preprocessed with grayscaling and converted to PGM format. These images are input directly into the deep convolutional network, which learns from matching and non-matching image pairs between the CNNs using a contrastive loss function in the Siamese architecture. The proposed OCR system takes far less time and gives more accurate results than a regular CNN. The model can become a powerful tool for identification, particularly where there is a high degree of variation in writing styles or limited training data is available.
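The dissimilarity-based training objective the abstract mentions is the standard contrastive loss, which can be written in a few lines (the margin value is an illustrative default, not the paper's setting):

```python
def contrastive_loss(dist, same, margin=1.0):
    """Contrastive loss used to train Siamese networks: pull matching
    pairs (same=True) together and push non-matching pairs apart up
    to `margin`. `dist` is the Euclidean distance between the two CNN
    embeddings of the character images."""
    if same:
        return dist ** 2
    return max(0.0, margin - dist) ** 2
```

At inference time, a pair whose embedding distance falls below a threshold is predicted as a character match.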
Embedded artificial intelligence system using deep learning and raspberrypi f...IAESIJAI
Melanoma is a kind of skin cancer that originates in the melanocytes responsible for producing melanin; it can be a severe and potentially deadly form of cancer because it can metastasize to other regions of the body if not detected and treated early. To facilitate early detection, various computer-assisted low-cost, reliable, and accurate diagnostic systems have recently been proposed based on artificial intelligence (AI) algorithms, particularly deep learning techniques. This work proposes an innovative, intelligent system that combines the internet of things (IoT), a Raspberry Pi connected to a camera, and a deep learning model based on a deep convolutional neural network (CNN) for real-time detection and classification of melanoma lesions. The key stages of our model before serializing it to the Raspberry Pi are: first, preprocessing, comprising data cleaning, data transformation (normalization), and data augmentation to reduce overfitting during training; then, feature extraction using the deep CNN; and finally, classification with a sigmoid activation function. The experimental results indicate the efficiency of our proposed classification system: we achieved an accuracy of 92%, a precision of 91%, a sensitivity of 91%, and an area under the receiver operating characteristic curve (AUC-ROC) of 0.9133.
Deep learning based biometric authentication using electrocardiogram and irisIAESIJAI
Authentication systems play an important role in a wide range of applications. Traditional token-, certificate-, and password-based authentication systems are now being replaced by biometric ones, generally based on data obtained from the face, iris, electrocardiogram (ECG), fingerprint, or palm print. However, such unimodal authentication models suffer from accuracy and reliability issues, so multimodal biometric authentication systems have gained huge attention for developing robust authentication. Moreover, current developments in deep learning have enabled more robust architectures that overcome the issues of traditional machine learning based authentication systems. In this work, we adopt ECG and iris data and train the obtained features with a hybrid convolutional neural network and long short-term memory (CNN-LSTM) model. In the ECG, R-peak detection is an important aspect of feature extraction, and morphological features are extracted. Similarly, Gabor wavelet, gray level co-occurrence matrix (GLCM), gray level difference matrix (GLDM), and principal component analysis (PCA) based feature extraction methods are applied to the iris data. The final feature vector, obtained from the MIT-BIH and IIT Delhi Iris datasets, is trained and tested using the CNN-LSTM. The experimental analysis shows that the proposed approach achieves an average accuracy, precision, and F1-score of 0.985, 0.962, and 0.975, respectively.
Hybrid channel and spatial attention-UNet for skin lesion segmentationIAESIJAI
Melanoma is a type of skin cancer that has affected many lives globally. American Cancer Society research suggests it is a serious type of skin cancer that can lead to mortality, but it is almost 100% curable if detected and treated in its early stages. Automated computer vision-based schemes are now widely adopted, but these systems suffer from poor segmentation accuracy. Deep learning (DL) has become the promising solution, performing extensive training for pattern learning and providing better classification accuracy. However, skin lesion segmentation is hindered by skin hair, unclear boundaries, pigmentation, and moles. To overcome this, we adopt a UNet-based deep learning scheme and incorporate an attention mechanism that combines low-level and high-level statistics with feedback and skip-connection modules, which helps obtain robust features without neglecting channel information. Further, we use channel attention and spatial attention modulation to achieve the final segmentation. The proposed DL scheme is evaluated on a publicly available dataset, and experimental investigation shows that the proposed hybrid attention UNet approach achieves average performance of 0.9715, 0.9962, and 0.9710.
Photoplethysmogram signal reconstruction through integrated compression sensi...IAESIJAI
The transmission of photoplethysmogram (PPG) signals in real time is extremely challenging and motivates the use of an internet of things (IoT) environment for healthcare monitoring. This paper proposes an approach for PPG signal reconstruction through integrated compression sensing and basis function aware shallow learning (CSBSL). The integrated CSBSL approach combines compression of PPG signals across multiple channels, thereby improving the reconstruction accuracy essential for healthcare monitoring. An optimal basis function aware shallow learning procedure is employed on the PPG signals with prior initialization and is further fine-tuned using knowledge from the other channels, which exploits the sparsity of the PPG signals. The learning procedure, combined with the PPG signals, retains knowledge of spatial and temporal correlation. The proposed integrated CSBSL approach consists of two steps; in the first step, basis function based shallow learning is carried out by training on the PPG signals. The proposed method is evaluated on multichannel PPG signal reconstruction, which potentially benefits clinical applications through PPG monitoring and diagnosis.
Speaker identification under noisy conditions using hybrid convolutional neur...IAESIJAI
Speaker identification is a biometric task that classifies or identifies a person among other speakers based on speech characteristics. Recently, deep learning models have outperformed conventional machine learning models in speaker identification. Spectrograms of speech have been used as input to deep learning-based speaker identification with clean speech; however, the performance of speaker identification systems degrades under noisy conditions. Cochleagrams have shown better results than spectrograms in deep learning-based speaker recognition under noisy and mismatched conditions. Moreover, hybrid convolutional neural network (CNN) and recurrent neural network (RNN) variants have shown better performance than CNN or RNN variants alone in recent studies. However, no attempt has been made to use a hybrid CNN with enhanced RNN variants and cochleagram input to improve speaker identification under noisy and mismatched conditions. In this study, speaker identification using a hybrid CNN and gated recurrent unit (GRU) with cochleagram input is proposed for noisy conditions. The VoxCeleb1 audio dataset, with real-world noises, white Gaussian noise (WGN), and without additive noise, was employed for the experiments. The experimental results and the comparison with existing works show that the proposed model performs better than the other models in this study and existing works.
Multi-channel microseismic signals classification with convolutional neural n...IAESIJAI
Identifying and classifying microseismic signals is essential for warning of dangers in mines. Deep learning has replaced traditional methods, but labor-intensive manual identification and varying deep learning outcomes pose challenges. This paper proposes a transfer learning-based convolutional neural network (CNN) method called microseismic signals CNN (MS-CNN) to automatically recognize and classify microseismic events and blasts. The model was trained on a limited data sample to obtain an optimal weight model for microseismic waveform recognition and classification. A comparative analysis was performed against an existing CNN model and classical image classification models such as AlexNet, GoogLeNet, and ResNet50. The outcomes demonstrate that the MS-CNN model achieved the best recognition and classification performance (99.6% accuracy) in the shortest time (0.31 s to identify the 277 images in the test set). The MS-CNN model can thus efficiently recognize and classify microseismic events and blasts in practical engineering applications, improving the timeliness of microseismic signal recognition and further enhancing the accuracy of event classification.
Sophisticated face mask dataset: a novel dataset for effective coronavirus di...IAESIJAI
Efficient and accurate coronavirus disease (COVID-19) surveillance necessitates robust identification of individuals wearing face masks. This research introduces the sophisticated face mask dataset (SFMD), a comprehensive compilation of high-quality face mask images enriched with detailed annotations on mask types, fits, and usage patterns. Leveraging cutting-edge deep learning models (EfficientNet-B2, ResNet50, and MobileNet-V2), we compare SFMD against two established benchmarks: the real-world masked face dataset (RMFD) and the masked face recognition dataset (MFRD). Across all models, SFMD consistently outperforms RMFD and MFRD in key metrics, including accuracy, precision, recall, and F1 score. Additionally, our study demonstrates the dataset's capability to cultivate robust models resilient to intricate scenarios like low-light conditions and facial occlusions due to accessories or facial hair.
Transfer learning for epilepsy detection using spectrogram imagesIAESIJAI
Epilepsy stands out as one of the most common neurological diseases. The neural activity of the brain is observed using electroencephalography (EEG). Manual inspection of EEG brain signals is a slow and arduous process that puts a heavy load on neurologists and affects their performance. The aim of this study is to find the best transfer learning model for automatically distinguishing epileptic from normal activity, classifying EEG signals via spectrogram images that represent the percentage of energy for each coefficient of the continuous wavelet transform. The dataset comprises EEG signals recorded at an epilepsy monitoring unit. The study presents an application of transfer learning comparing three models, AlexNet, visual geometry group (VGG19), and residual neural network (ResNet), in different combinations with seven different classifiers. Testing the models yielded different values of accuracy and other performance metrics; the best combination was ResNet with a support vector machine (SVM) classifier, which classified EEG signals with a high success rate: 97.22% accuracy and a 2.78% error rate.
Deep neural network for lateral control of self-driving cars in urban environ...IAESIJAI
The exponential growth of the automotive industry clearly indicates that self-driving cars are the future of transportation. However, their biggest challenge lies in lateral control, particularly in urban bottleneck environments, where disturbances and obstacles are abundant. In these situations, the ego vehicle has to follow its own trajectory while rapidly correcting deviation errors without colliding with other nearby vehicles. Various research efforts have focused on developing lateral control approaches, but these methods remain limited in terms of response speed and control accuracy. This paper presents a control strategy using a deep neural network (DNN) controller to keep the car on the centerline of its trajectory and adapt to disturbances arising from deviations or trajectory curvature, focusing on minimizing deviation errors. The Matlab/Simulink software is used for designing and training the DNN. Finally, simulation results confirm that the suggested controller outperforms traditional lateral controllers in precision, with lateral deviation remaining below 0.65 meters, and in rapidity, with a response time of 0.7 seconds.
Attention mechanism-based model for cardiomegaly recognition in chest X-Ray i...IAESIJAI
Recently, cardiovascular diseases (CVDs) have become a rapidly growing problem in the world, especially in developing countries, which are facing lifestyle changes that introduce new risk factors for heart disease and require particular and urgent attention. Cardiomegaly is a sign of cardiovascular disease that refers to various conditions; it is associated with enlargement of the heart, which can be either transient or permanent depending on the underlying condition. Furthermore, cardiomegaly is visible on any imaging test, including chest X-ray images, one of the most common tools cardiologists use to detect and diagnose many diseases. In this paper, we propose an innovative deep learning (DL) model based on an attention module and the MobileNet architecture to recognize cardiomegaly patients using the popular ChestX-ray8 dataset. The attention module captures the spatial relationships between the relevant regions in chest X-ray images. The experimental results show that the proposed model achieved interesting results, with an accuracy rate of 81%, which makes it suitable for detecting cardiomegaly.
Efficient commodity price forecasting using long short-term memory modelIAESIJAI
Predicting commodity prices, particularly food prices, is a significant concern for various stakeholders, especially in regions that are highly sensitive to commodity price volatility. Historically, models such as the autoregressive integrated moving average (ARIMA) and support vector machine (SVM) have been suggested for the forecasting task, but they struggle to capture the multifaceted and dynamic factors influencing these prices. Recently, deep learning approaches have demonstrated considerable promise in handling complex forecasting tasks. This paper presents a novel long short-term memory (LSTM) network-based model for commodity price forecasting, applied to five essential commodities, namely bread, meat, milk, oil, and petrol. The proposed model focuses on advanced feature engineering involving moving averages, price volatility, and past prices. The results reveal that our model outperforms traditional methods, achieving 0.14, 3.04%, and 98.2% for root mean square error (RMSE), mean absolute percentage error (MAPE), and R-squared (R2), respectively. The model is also simple: a single-cell LSTM architecture that reduces training time to a few minutes instead of hours. This paper contributes to the economic literature on price prediction using advanced deep learning techniques and provides practical implications for managing commodity price instability globally.
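The feature-engineering step the abstract names (moving averages, price volatility, past prices) can be sketched as a plain windowing pass; the window length and the use of a population standard deviation are illustrative assumptions:

```python
import statistics

def engineer_features(prices, window=3):
    """Build the kind of LSTM inputs the paper describes for each
    time step that has a full window of history: past prices (lags),
    a moving average, and price volatility, paired with the next
    price as the prediction target."""
    rows = []
    for t in range(window, len(prices)):
        past = prices[t - window:t]
        rows.append({
            "lags": list(past),              # past prices
            "ma": sum(past) / window,        # moving average
            "vol": statistics.pstdev(past),  # volatility proxy
            "target": prices[t],             # next price to predict
        })
    return rows
```

Each row would then be fed to the LSTM as one training example.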
1-dimensional convolutional neural networks for predicting sudden cardiacIAESIJAI
Sudden cardiac arrest (SCA) is a serious heart problem that occurs without symptoms or warning and causes high mortality, so it is important to predict the incidence of SCA. Current methods for predicting ventricular fibrillation (VF) episodes require monitoring patients over time. New technologies, especially machine learning, are gaining popularity due to the benefits they provide, but most existing systems rely on manual processes, which can lead to inefficiencies in disseminating patient information, while existing deep learning methods rely on large datasets that are not publicly available. In this study, we propose a deep learning method based on one-dimensional convolutional neural networks that learns from discrete Fourier transform (DFT) features of raw electrocardiogram (ECG) signals. The results showed that our method was able to predict the onset of SCA with an accuracy of 96%, approximately 90 minutes before it occurred. Such predictions can save many lives; that is, optimized deep learning models can outperform manual models in analyzing long-term signals.
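The DFT feature extraction the method relies on can be illustrated directly. A real pipeline would use an FFT over long ECG windows, so the direct O(N^2) transform below is for clarity only, and the number of retained coefficients is an assumption:

```python
import cmath

def dft_features(signal, n_coeffs=4):
    """Compute the first n_coeffs DFT magnitudes of an ECG window,
    the kind of frequency-domain feature a 1-D CNN could consume.
    X[k] = sum_t x[t] * exp(-2*pi*i*k*t/N), feature = |X[k]|."""
    n = len(signal)
    mags = []
    for k in range(n_coeffs):
        coeff = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        mags.append(abs(coeff))
    return mags
```

For a pure sinusoid of frequency k cycles per window, only the k-th magnitude is non-zero, which is what makes these features informative about heart-rhythm periodicity.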
A deep learning-based approach for early detection of disease in sugarcane pl...IAESIJAI
In many regions of the nation, agriculture serves as the primary industry, and the farming environment now presents farmers with a number of challenges. One of the major concerns, and the focus of this research, is disease prediction: disease in crop plants has an impact on agricultural production, so a methodology is suggested to automate the identification of disease during plant growth and warn farmers in advance so they can take appropriate action. In this work, a novel DenseNet-support vector machine with explainable artificial intelligence (DNet-SVM: XAI) interpretation, combining DenseNet with a support vector machine (SVM) and local interpretable model-agnostic explanations (LIME), is proposed. DNet-SVM: XAI was created through a series of modifications to DenseNet201, including the addition of an SVM classifier. Before the SVM identifies whether an image is healthy or unhealthy, features are first extracted using the DenseNet convolutional network. In addition to offering a likely explanation for the prediction, the reasoning is carried out using the visual cues produced by LIME. The proposed approach, paired with its demonstrated interpretability and precision, may successfully assist farmers in detecting infected plants and recommending pesticides for the identified disease.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
IAES International Journal of Artificial Intelligence (IJ-AI)
Vol. 12, No. 1, March 2023, pp. 402~414
ISSN: 2252-8938, DOI: 10.11591/ijai.v12.i1.pp402-414
Journal homepage: http://ijai.iaescore.com

Survey of indoor tracking systems using augmented reality
Ashraf Saad Shewail¹, Neven Abdelaziz Elsayed¹, Hala Helmy Zayed¹,²

¹ Faculty of Computers and Artificial Intelligence, Benha University, Egypt
² Faculty of Information Technology and Computer Science, Nile University, Egypt
Article history: Received Feb 4, 2022; Revised Aug 7, 2022; Accepted Sep 5, 2022

ABSTRACT
Augmented reality overlays virtual content on the physical world, displaying location-based information more efficiently. Tracking is a trace of recorded locations, with a reading taken at a set time interval, at a set distance, when the direction changes by more than a certain angle, or by a combination of these. Tracking is divided into two types: outdoor tracking and indoor tracking. Augmented reality adds value and information to tracking applications, whether indoor or outdoor. Recently, outdoor tracking by the global positioning system (GPS) has become an essential component of navigation applications. However, indoor tracking is still challenging in the augmented reality field. Most augmented reality indoor tracking uses optical and sensor-based tracking, such as markers, beacons, Raspberry Pi, and route planner modules for the building. With technological enhancements, indoor tracking started to rely on Wi-Fi, Bluetooth, geographic information systems, radio frequency identification, and sensor chip technologies. Our paper describes the recent augmented reality tracking techniques from 2011 to 2021. The paper compares image detection algorithms with various communication technologies, discussing the advantages and the limitations of each technology.
Keywords: Augmented reality; Indoor tracking; Route planner module; Simultaneous localization and mapping
This is an open access article under the CC BY-SA license.
Corresponding Author:
Ashraf Saad Shewail
Department of Computer Science, Faculty of Computers and Artificial Intelligence, Benha University
Fareed Nada Street, Banha, Al Qalyubia 13511, Egypt
Email: ashraf.saad@fci.bu.edu.eg
1. INTRODUCTION
Tracking applications assist users in detecting their location and aid in navigating to destinations [1].
According to a recent study performed on mobile phone users who use navigation, approximately 95% of
users who own a smartphone device used a mapping application at least once, which means that maps are in
daily use for most smartphone users [2]. Guiding users to particular locations in indoor environments is a challenging task. Recently, systems for indoor areas have developed considerably. To take advantage of indoor tracking technology across various places and fields, a system must quickly determine the current location and destination. Indoor tracking must be reliable in reaching the specified goal, able to expand as the place develops, and independent of the area in which it is used [3].
It is still easy to get lost indoors, where global positioning system (GPS) satellite signals are not precisely detectable for navigation applications. GPS will report an identical position even when the person is on different floors [4]. The systems in this field have been enhanced, become more accurate, and can now locate users in real time [5]. People spend most of their time indoors [6]. These improvements have allowed location systems to be leveraged in several fields. Studies of guidance systems can be found applied in monitoring, such as in faculties, museums, and art galleries [7], [8], medicine [9]–[11], robots [12], [13], education [14], and navigation [15].
These days, hybrid technologies are used to combine indoor and outdoor tracking in a single system. Some recent researchers are trying to eliminate the reliance on predefined maps for indoor tracking and find alternative solutions, in addition to solving the problems of low lighting intensity and deployment in places with multiple floors [16].
In this paper, we discuss indoor tracking limitations and challenges. Limitations include light intensity, environment complexity, and multiple floors. Algorithms used in outdoor tracking differ from those used in indoor tracking, and most indoor tracking depends on predefined maps, which leads to additional map-making costs. The paper also introduces comparative studies between the various communication technologies and image detection algorithms used in tracking systems.
2. METHOD
In this survey, we restrict our searches to studies that use various approaches for building indoor tracking systems. We are interested in including studies that used various positioning technologies, as well as studies that used different feature detection algorithms. We selected these studies of diverse settings to address the problem of getting lost indoors, where GPS satellite signals are not precisely detectable for indoor navigation systems. A broad literature search of IEEE Xplore, Scopus, Science Direct, the Egyptian Knowledge Bank (EKB), and Google Scholar was conducted. We included all the retrieved studies that combined augmented reality, positioning techniques, and image processing algorithms. In addition, the reference lists of all retrieved papers were reviewed to identify other relevant articles. The studies in this survey are organized into three groups: active tracking [17]–[19], passive tracking [20]–[22], and hybrid tracking [5], [23], [24].
The taxonomy of indoor tracking systems concerns the localization of persons and objects within buildings. Indoor localization is a technical challenge because GPS does not work reliably within interior spaces [25]. Most indoor tracking systems use communication technologies or image-based detection algorithms. Figure 1 shows the techniques and algorithms used in each method.
Figure 1. Indoor tracking systems taxonomy
The communication positioning technologies used in our smartphones, known as GPS or the global navigation satellite system (GNSS) [26], do not work effectively inside homes or in areas covered by trees or surrounded by concrete buildings. Presently, indoor positioning is achieved using wireless technologies such as wireless local area network (WLAN) [27], Bluetooth [28], near field communication (NFC) [29], ultrasound [30], infrared [31], and radio frequency identification (RFID) [32]. A comparison among the communication technologies is shown in Table 1.
Each indoor positioning technology has its own parameters, as shown in Table 1, so the selection of a technology will finally be based on accuracy, cost, precision, ease of handling, scalability, and power
consumption. For indoor positioning, the best solution is the fusion of geographic information systems (GIS)
features and Wi-Fi technologies to get the best of both worlds [18].
Table 1. Comparison between various communication technologies
Technology                         Coverage        Power consumption  Accuracy  Cost
GPS                                Outdoor         Very high          6–10 m    High
Wi-Fi                              Outdoor/indoor  High               1–5 m     Low
Bluetooth                          Indoor          Low                2–5 m     High
RFID                               Indoor          Low                1–2 m     Low
Ultra-wideband (UWB)               Outdoor/indoor  Low                5–30 cm   High
Infrared                           Indoor          Low                1–2 m     Medium
ZigBee                             Indoor          Low                3–5 m     Low
Cellular                           Outdoor/indoor  Low                50–150 m  High
Visible light communication (VLC)  Indoor          Low                4–10 cm   Low
NFC                                Indoor          Low                4 cm      Low
Frequency modulation (FM)          Indoor          Low                2–4 m     Low
Ultrasound                         Indoor          Low                3 cm–1 m  Medium
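One reason the radio technologies in Table 1 differ so widely in accuracy is how distance is inferred from signal strength. A common approach for Wi-Fi and Bluetooth ranging is the log-distance path-loss model; the sketch below is a minimal illustration (the default transmit power and path-loss exponent are illustrative values, not parameters from the surveyed papers):

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=2.0):
    """Estimate distance (meters) from an RSSI reading using the
    log-distance path-loss model:
        RSSI = tx_power - 10 * n * log10(d)
    where tx_power is the RSSI measured at 1 m and n is the path-loss
    exponent (~2 in free space, typically higher indoors)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))
```

For example, with a 1 m reference power of -40 dBm and n = 2, a reading of -60 dBm maps to roughly 10 m; indoors, multipath and obstructions make n uncertain, which is why RSSI-based accuracy is coarse.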
Also, image-based detection algorithms can be used to build indoor tracking systems. The initial processing operation in computer vision is feature detection, which extracts the interest points needed for the next processing steps [33]. An interest point in an image should be distinctive and stable under disturbances in the image region [34]. The scale-invariant feature transform (SIFT) is the most used image feature extractor. The SIFT technique is scale-invariant and rotation-invariant. It tests each pixel in the image against its eight neighbors in the same scale and the nine pixels in each adjacent scale [35]. The SIFT algorithm was slow, and advanced applications required a faster version. The speeded-up robust features (SURF) algorithm is based on the same principles as SIFT, but with some approximations that make the method much faster [36]. Like SIFT, this technique is scale-invariant and rotation-invariant. The features from accelerated segment test (FAST) algorithm has the great advantage of being faster than many other popular feature extraction methods. In the FAST method, if a pixel is significantly distinct from its neighboring pixels, then this pixel is a corner point [37]. To verify the performance of the feature extraction module, feature points of indoor environment images were tested for FAST, SURF, SIFT, and FAST-SURF, respectively; the results are shown in Table 2 [23]. The oriented FAST and rotated BRIEF (ORB) detector [11] combines the FAST keypoint detector with binary robust independent elementary features (BRIEF) descriptors. It is a free alternative to SIFT and SURF that outperforms them in computation time and performance.
Table 2. Comparison between feature detection algorithms
Algorithm  Number of extracted features  Extraction time (ms)  Real time
FAST       221                           20                    No
SURF       21                            22                    No
SIFT       7                             20                    Yes
FAST-SURF  26                            20                    Yes
ORB-SLAM   77                            20                    Yes
The ORB-simultaneous localization and mapping (ORB-SLAM) algorithm can better suit the needs of mobile augmented reality (AR) systems, such as feature detection speed, rotation invariance, and illumination invariance, and it can be applied in real time. As shown in Table 2, each of the FAST, SURF, and FAST-SURF algorithms sometimes has the best feature detection numbers but cannot be applied in real time. The ORB-SLAM algorithm is very robust and can be applied in real time.
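The FAST criterion described above (a pixel is a corner when it differs significantly from the pixels on a circle around it) can be sketched in a few lines. The sketch below is a simplified illustration, not a production detector: it uses the standard radius-3 Bresenham circle and the common FAST-9 variant (at least 9 contiguous circle pixels all brighter or all darker than the center by a threshold); the test image is synthetic.

```python
# The 16 offsets of the radius-3 Bresenham circle used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=9):
    """Simplified FAST segment test: (x, y) is a corner if at least n
    CONTIGUOUS pixels on the radius-3 circle are all brighter than
    img[y][x] + t or all darker than img[y][x] - t (FAST-9 variant)."""
    center = img[y][x]
    # Classify each circle pixel: +1 brighter, -1 darker, 0 similar.
    states = []
    for dx, dy in CIRCLE:
        v = img[y + dy][x + dx]
        states.append(1 if v > center + t else (-1 if v < center - t else 0))
    # Look for a run of n equal, non-zero states; doubling handles wrap-around.
    run, prev = 0, 0
    for s in states + states:
        if s != 0 and s == prev:
            run += 1
        elif s != 0:
            run = 1
        else:
            run = 0
        prev = s
        if run >= n:
            return True
    return False

# A 16x16 synthetic image: bright 8x8 block in the top-left corner.
img = [[255 if x < 8 and y < 8 else 0 for x in range(16)] for y in range(16)]
```

On this image, `is_fast_corner(img, 7, 7)` is True (the block's corner), while the block's edge midpoint `(4, 7)` and interior `(4, 4)` are rejected, illustrating why the segment test responds to corners rather than edges or flat regions.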
During the past ten years, multiple researchers have sought to apply AR technology to tour guidance systems to increase users' motivation and knowledge. Since GPS cannot be applied inside buildings, alternative methods for indoor guidance had to be researched, and researchers have tended to use different techniques to create indoor tracking systems. In this paper, most methods are divided into three categories: passive tracking, active tracking, and hybrid tracking [38].
2.1. Passive tracking
Among the earliest attempts to solve indoor tracking problems with passive tracking techniques, the authors of [20] built a system based on laptops and universal serial bus (USB) webcams that capture the live view frames. This
system uses ARToolKit to detect a marker and compare it with trained markers saved in the database as binary data. When the marker at one exact location is detected, it is transformed into the location ID for processing. The route planner module calculates the link between the current location and the target. The open graphics library (OpenGL) application programming interface (API) is applied to load the virtual reality modeling language (VRML) model based on the camera's coordinates obtained through the marker. The research field then evolved to improve indoor tracking systems. The enhanced system comprises a web camera, a Raspberry Pi, display glasses, and an input device; all devices associated with the Raspberry Pi have different capacities. The client starts the program by entering the goal point, after which the camera starts capturing live images to identify location markers [21]. After location markers are detected and recognized, the current location marker data is fed into the route planner algorithm to decide which virtual item is shown on the marker, depending on the direction to be taken to the next marker location or the final goal. Yadav et al. [22] introduced a technological development that increases ease of use and reduces cost: the laptop or Raspberry Pi is replaced by a smartphone with a rear camera. Figure 2 shows the proposed system in [39].
Figure 2. Indoor tracking system workflow
Due to the difficulty of use and the high cost of hardware devices, most researchers have turned to mobile phone image detection. Most of the research in this category relies on predefined maps for indoor tracking. The system is based on marker-based tracking and mapping techniques. The SLAM algorithm provides computer graphics (signs and directions) that map and simultaneously update a two-dimensional plan of the camera scene [40]. Al Delail et al. [41] merged augmented reality with the SLAM algorithm. The AR layer notifies the user of a nearby point of interest through image marker recognition. Overlaying self-explanatory three-dimensional virtual objects associated with the location on the real-time video capture is provided by the Vuforia software development kit (SDK); it is portable and entirely configurable, and it allows object data to be downloaded from the cloud.
In the same year, another researcher [42] concluded that fully detailed two-dimensional (2D) or 3D maps are unnecessary given an accurate indoor positioning method that uses a fiducial marker system. This system uses continuous localization to tell users their current position at all times. Discrete localization is required to discover and detect some markers initially. The user chooses a target from the menu and is instantly taken to the viewfinder screen. Every time a target is detected correctly with the Vuforia SDK in the camera frame, the Dijkstra algorithm [43] recalculates the path and gives directions to the user. The parallel tracking and mapping (PTAM) algorithm can create and expand a map while following the camera pose in an unknown environment; for augmented reality, it requires no markers or pre-made maps [44].
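The path recalculation step above relies on Dijkstra's shortest-path algorithm over a graph of indoor waypoints. A minimal sketch follows; the corridor graph, node names, and distances are illustrative, not taken from [42]:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted directed graph given as
    {node: [(neighbor, distance), ...]}. Returns (path, total_distance)."""
    # Priority queue of (distance so far, node, path taken).
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return path, dist
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return None, float("inf")

# Illustrative corridor graph: distances in meters between waypoints.
corridor = {
    "entrance": [("hall", 5), ("stairs", 9)],
    "hall": [("stairs", 3), ("room_101", 4)],
    "stairs": [("room_101", 6)],
    "room_101": [],
}
```

Here `dijkstra(corridor, "entrance", "room_101")` returns the 9-meter route through the hall rather than the longer route via the stairs; in a navigation system, the same call is simply re-issued from whichever waypoint the marker detection places the user at.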
ORB-SLAM [45] appeared to expand the versatility of PTAM to environments that are intractable for that system. The true objective of a SLAM system is to create a map that can be used to provide accurate localization in the future; visual SLAM aims to achieve this using camera sensors. ORB-SLAM was designed
from scratch as a new monocular SLAM system with some new ideas and algorithms, but also incorporating excellent works developed in the past few years, such as loop detection, the g2o optimization framework [46], and ORB features [47]. ORB-SLAM processes in real time in various locations, both large and small, indoor and outdoor. The system is resistant to severe motion clutter and supports wide-baseline loop closure and relocalization. It utilizes the same components as other SLAM algorithms: tracking, mapping, relocalization, and loop closure. ORB-SLAM outperforms other state-of-the-art monocular SLAM techniques. ORB-SLAM3 [48] is the first real-time SLAM framework that supports visual, visual-inertial, and multi-map SLAM with monocular, stereo, and red-green-blue-depth (RGB-D) cameras, and lens models such as pin-hole and fisheye. ORB-SLAM3 uses Atlas for accurate, smooth map merging, place recognition, camera relocalization, and loop closure. The system creates a unique DBoW2 [49] keyframe database for relocalization, loop closure, and map merging. ORB-SLAM3 is as robust as the best systems available in the literature and significantly more accurate; its point of failure is low-texture environments.
Some AR-based indoor navigation systems enhance users' spatial learning besides leading them to their destinations safely and quickly. To improve positioning accuracy, Bayesian estimation and the k-nearest neighbors (KNN) algorithm have been used [50]. Recently, high-frequency radio frequency identification (HF RFID) was integrated with Kalman filtering and Tukey smoothing to improve indoor tracking accuracy [51]. In Liu and Meng [52], the interface for indoor navigation is designed on HoloLens; arrows are used to aid orientation, and semantic meanings in icons with text can serve as virtual marks and help with spatial learning.
There are many feature extractors, such as SURF, gradient location and orientation histogram (GLOH), and SIFT [53], that are well suited to applications such as image recognition and are used with marker-based applications. Later research built a system on a modified version of the SURF algorithm [54] that is used to extract real-world features and track objects. Tracked items are handled with the projection (pose) matrix computed from the extracted features by homography techniques. The advanced algorithm calculates the center pose and visualizes a 3D model over various images from a standard data set. It was confirmed to be useful and practical for marker-less mobile augmented reality, although the advanced algorithm was applied only to the data set, not in a real application [55]. Indoor tracking is also used in archaeological areas to improve the tourist experience. The MAGIC-EYES guidance system [56] uses augmented reality technology, relying on mobile phone sensors such as the camera and gyroscope. Markers such as plaques, stone tablets, and building patterns also serve to identify the images of recognizable objects, the viewing direction of tourists, and geographical location information. A pilot study applied the traditional guidance system and the developed augmented reality guidance system to twelve clients; the test results showed that the MAGIC-EYES system is much better than traditional methods.
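The KNN positioning step mentioned above [50] can be sketched as follows: given a fingerprint database of reference points with known coordinates and recorded RSSI vectors, the position estimate is the average position of the k fingerprints closest to the current observation in signal space. The radio map values below are illustrative, not data from the cited study:

```python
def knn_position(fingerprints, observed_rssi, k=3):
    """Average position of the k fingerprints whose RSSI vectors are
    closest (Euclidean distance in signal space) to the observation.
    fingerprints: list of ((x, y), [rssi_ap1, rssi_ap2, ...])."""
    def signal_distance(rssi):
        return sum((a - b) ** 2 for a, b in zip(rssi, observed_rssi)) ** 0.5
    nearest = sorted(fingerprints, key=lambda fp: signal_distance(fp[1]))[:k]
    x = sum(pos[0] for pos, _ in nearest) / k
    y = sum(pos[1] for pos, _ in nearest) / k
    return x, y

# Illustrative radio map: 4 reference points, RSSI (dBm) from 3 access points.
radio_map = [
    ((0.0, 0.0), [-40, -70, -80]),
    ((5.0, 0.0), [-70, -40, -80]),
    ((0.0, 5.0), [-70, -80, -40]),
    ((5.0, 5.0), [-80, -70, -55]),
]
```

An observation of `[-45, -68, -78]` is closest in signal space to the first reference point, so with k = 1 the estimate is (0.0, 0.0); larger k smooths noise at the cost of pulling the estimate toward neighboring reference points.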
Complementing the development process in indoor tracking, the HyMoTrack system [57] was developed, in which the mapping phase was built on small SLAM maps captured with a wide-angle camera. The SLAM maps and 2D feature markers were integrated into a global reference map. The markers also serve to discover the client's start position if no previous experience is available. While a discovered image marker delivers a position, the SLAM thread operates in parallel to find the match on the sub-map. The A* planning algorithm is used to calculate a path between two points, and Blender is used to place 3D content for the augmented reality visualization. During path detection, the random sample consensus (RANSAC) algorithm is used to remove outlier features that are not essential areas of the environment, although this did not solve every problem of using a SLAM algorithm. The HyMoTrack system, which depends on a visual hybrid tracking strategy, was enhanced in later research. The enhanced system executed a 3D model generation algorithm, which automatically generates a 3D mesh out of a vectorized 2D floor plan. The field of view path (FOVPath) technique intends to respond not exclusively to the client's location and the target; the performance of visual positioning systems can be adjusted [58]. On the other hand, FOVPath is reliant on the view direction and the field of view (FOV) capabilities of the device used.
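The RANSAC outlier-removal step mentioned above can be illustrated with a minimal line-fitting sketch on synthetic data (not from [57]): repeatedly fit a model to a random minimal sample, count the points that agree with it, and keep the model with the most inliers.

```python
import random

def ransac_line(points, iterations=100, threshold=0.5, rng=None):
    """Fit y = m*x + c to 2D points while ignoring outliers: repeatedly
    pick two random points, fit the exact line through them, count the
    inliers within `threshold` vertical distance, and keep the best line."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    best_model, best_inliers = None, []
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical line; skipped in this simplified sketch
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers

# 8 points on y = 2x + 1, plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(8)] + [(2, 40), (5, -30)]
```

A least-squares fit over all 10 points would be dragged off the line by the two outliers; RANSAC recovers the line through the 8 consistent points, which is exactly why it is used to discard spurious feature matches during tracking.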
Furthermore, the detection algorithm was created to work without any previous knowledge, such as a layer name or even metadata of different objects. Recognized figures are stored to build a library for further research. The time to finish the task also varies significantly between the A*-based straight path and FOVPath: a median of 33.88 seconds is estimated for achieving the mission through the straight path, versus 23.32 seconds for the FOVPath [59].
With enhanced augmented reality technologies like object recognition and computer vision, location-based augmented reality becomes more interactive. This application comprises three major parts: a database, a search engine, and an output engine. The application feeds screen captures and the client's area data into the search engine for matching against the database. Afterwards, the output engine renders the match results, including the augmented reality components, the 3D model, and attraction data, on the screen of mobile devices. The characteristics of the campus attractions are extracted with SIFT and stored in a one-dimensional vector. SIFT utilizes the edge and corner places of a picture to recognize images in various circumstances, including rotated or twisted images.
The output engine places a 3D model in a particular spot based on a convolutional neural network (CNN) model. The CNN is used to decide the specific position of the 3D models at each point of attraction. The application achieves an attraction recognition accuracy rate of 90%. The convolution process takes a lot of time unless the training process is eliminated [60]. The architecture of the output engine is shown in Figure 3.
Figure 3. Architecture of output engine
Recently, the building information modelling (BIM) tracker system [61], in which localization is performed by matching image streams captured by a camera against a 3D model of the building, has been used as a visual tracking framework. The study's principal hypothesis is that centimeter-level localization precision can be achieved without any drift by combining image information with a 3D building model. The camera's pose is determined by the Gauss-Newton algorithm with least-squares minimization, which iteratively minimizes the re-projection errors in an M-estimator sample consensus (MSAC) framework. Using Blender's ray-tracing algorithm, the virtual view is rendered, and the visible edges, as well as their corresponding 3D coordinates in the BIM coordinate system, are defined. The Canny edge detector is used to identify the edges in the image. After that, the 3D points are sampled and back-projected onto the image plane. Correspondences are formed by searching for image edges along the direction of the back-projected model edges of the sampled points. The MSAC estimator is used to eliminate incorrect 3D-to-2D correspondences, which can be caused by shadows or repetitive texture. Although the direct solution is faster, the iterative method provides more exact estimates, so the iterative Gauss-Newton approach was applied.
Recent systems use advanced feature tracking and augmented reality methods during navigation. Feature-based 3D point cloud localization demands a pre-deployment stage, in which the indoor environment must be 3D scanned and stored as anchors. In a database, these anchors are correlated with their corresponding locations, and navigation-related data is superimposed on the visual feed by the navigation assistant process.
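The Gauss-Newton least-squares step used for pose estimation above can be illustrated with a minimal one-parameter example. This is a toy exponential model, not the BIM tracker's actual re-projection residuals: at each iteration the residuals are linearized around the current parameter and the normal equation is solved for the update.

```python
import math

def gauss_newton_1d(xs, ys, theta=0.0, iterations=10):
    """Fit y = exp(theta * x) to data by Gauss-Newton: linearize the
    residuals r_i = y_i - exp(theta * x_i) around the current theta and
    solve the normal equation (J^T J) d = J^T r for the update d."""
    for _ in range(iterations):
        residuals = [y - math.exp(theta * x) for x, y in zip(xs, ys)]
        # Jacobian of the model w.r.t. theta: d/dtheta exp(theta*x) = x*exp(theta*x)
        jac = [x * math.exp(theta * x) for x in xs]
        jtj = sum(j * j for j in jac)
        jtr = sum(j * r for j, r in zip(jac, residuals))
        theta += jtr / jtj  # Gauss-Newton update
    return theta

# Noise-free data generated from y = exp(0.5 * x).
xs = [0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * x) for x in xs]
```

On this noise-free data, `gauss_newton_1d(xs, ys)` converges to theta = 0.5 within a few iterations; pose estimation works the same way, only with six pose parameters and re-projection errors as the residuals, so J^T J becomes a 6x6 system.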
Based on the anchors recognized in the camera and sensor feed, the client's current position and orientation are determined. After the routes are fully examined, the images are passed to the ARCore SDK for AR data overlay. The A* algorithm is used to compute the shortest path in the system [3]. To further improve performance and simplify indoor navigation, the next proposed model aims to decrease the use of hardware components and other technologies, such as artificial intelligence and deep learning, in the course of navigation, and instead use the cloud with augmented reality. When the application starts, the system proceeds using the user's voice message, from which the destination is launched; the virtual path is then loaded using the anchors, and virtual arrow signs guide the user to the destination. The user has to point the phone's camera at the anchors, and a sound erupts when the user walks through an anchor. Furthermore, after each anchor is passed, the next anchor begins emitting sound until the user arrives at the destination [62]. To apply passive tracking, the architecture shown in Figure 4 must be followed.
ISSN: 2252-8938
Int J Artif Intell, Vol. 12, No. 1, March 2023: 402-414
Figure 4. Modular architecture of passive tracking
2.2. Active tracking
Early research on this topic integrated RFID positioning [44], augmented reality, and
mobile routing to build a 3D augmented reality mobile navigation framework.
In Wang et al. [17], the hardware of the RFID indoor positioning system combines a practical RFID reader
and tags. The tag used in the RFID positioning method enables the system to obtain information on individual
locations. The navigation information server sends navigation information about the exhibition area
visitors are in to their mobile devices. The system's markerless augmented reality technology could be
combined with the positioning to enhance image recognition performance.
Zegeye et al. [63] follow the newer trend of using both Wi-Fi and GIS for indoor tracking,
merging mobile AR technology with Wi-Fi positioning technology. The researchers apply Wi-Fi received
signal strength (RSS) measurements in the considered indoor environment to construct radio maps using the
Wi-Fi fingerprinting approach. The fingerprinting data gives a coarse RSS estimate for all reachable APs.
The radio maps are stored as a 7×138 comma-separated values (CSV) file of RSS values. When a client
requests a position estimate, an RSS measurement gathered on the fly is sent from the Android-based
device's client application to the server through a socket as an extensible markup language (XML) file. The
received XML file is parsed at the server (a remote laptop computer), which runs the localization algorithm
to perform the position estimation. The determined location is transferred back to the client's Android
application as an XML file containing the expected locations, and the predicted location is displayed to the
user by the same application. The system could determine the client's position 67% of the time over 42 test
positions.
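The server-side localization step of such a fingerprinting pipeline can be sketched as a KNN lookup in signal space (the radio-map layout and RSS values below are illustrative, not the 7×138 map from [63]):

```python
import math

def knn_locate(radio_map, rss_query, k=3):
    """Estimate a position from a Wi-Fi RSS fingerprint radio map.
    radio_map: list of (position, rss_vector) pairs collected offline;
    rss_query: RSS vector (dBm per AP) measured on the fly.
    Returns the average position of the k nearest fingerprints in
    signal space (Euclidean distance over the RSS values)."""
    ranked = sorted(
        radio_map,
        key=lambda entry: math.dist(entry[1], rss_query),
    )[:k]
    xs = [pos[0] for pos, _ in ranked]
    ys = [pos[1] for pos, _ in ranked]
    return (sum(xs) / k, sum(ys) / k)
```

In a deployed system the query vector would arrive in the XML request, be matched against the CSV radio map, and the averaged coordinates sent back to the client for display.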
Continuing the development process and scientific research to solve remaining problems and
improve indoor tracking, approaches combining geographic information systems and sensor measurements
appeared. The augmented reality engine application (AREA) framework [19] comprises a portable augmented
reality kernel that enables location-based mobile augmented reality applications. The AREA kernel
consists of three algorithms: the tracking algorithm, the points of interest (POIs) algorithm, and the
clustering algorithm. Four technical issues were vital when building the kernel: POIs must display correctly
even when the device is held at a tilt; POIs should be shown to the user accurately and efficiently; the POI
concept must work with common mobile operating systems (OS) (iPhone operating
system (iOS), Android, and Windows Phone); and the kernel must handle clusters of POIs.
The idea of AREA, relating a client to the objects recognized in the camera view, depends on five
aspects: a virtual 3D world is used to connect the client's location to the objects; the client is
positioned at the origin of this world; and instead of the physical camera, a virtual 3D camera operates
within the built virtual 3D world.
Survey of indoor tracking systems using augmented reality (Ashraf Saad Shewail)
The sensor capabilities of the supported mobile operating systems make the virtual 3D
world possible: the phone's physical camera is mapped to the virtual 3D camera based on estimates from the
sensor data. Later research developed and built an indoor tracking system to improve positioning
correctness and accuracy. The pedestrian tracking algorithm [18] uses indoor environment
restrictions within a grid-based indoor model to improve a Wi-Fi-based system's localization. Indoor space is
partitioned into grid cells with a specific size and corresponding semantics; the precision of the grid model
relies on the grid size. The pedestrian algorithm repeatedly estimates the location probability over these
cells based on the Wi-Fi and magnetometer measurements from the mobile device. Tracking errors such as
implausible locations, wrong headings, and jumps between consecutive locations are produced by the Wi-Fi
positioning system and cause a low dynamic tracking efficiency. To decrease the prediction error, the tracking
outcomes are checked against measurements every three tracking intervals. The grid filter is a discrete
Bayesian filter that probabilistically determines a target's position from sensor measurements, and the
tracking system estimates positions over time using a Markov chain model. The advanced tracking algorithm,
which depends on Wi-Fi positioning technology, can provide meter-level location precision, with 92% of
positions within 3.5 m of error. To apply active tracking, the architecture shown in Figure 5 must be followed.
Figure 5. Modular architecture of active tracking
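The grid (discrete Bayesian) filter described above amounts to a Markov predict step followed by a measurement update. A one-dimensional sketch, with a hypothetical motion model and sensor model, is:

```python
def grid_filter_step(belief, transition, likelihood):
    """One predict-update cycle of a discrete Bayesian (grid) filter.
    belief: prior P(cell) over the grid cells;
    transition[i][j]: P(move from cell i to cell j), the Markov chain
    motion model; likelihood[j]: P(measurement | cell j) from the
    Wi-Fi/magnetometer sensor model. Returns the normalized posterior."""
    n = len(belief)
    # Predict: push the belief through the Markov motion model.
    predicted = [sum(belief[i] * transition[i][j] for i in range(n))
                 for j in range(n)]
    # Update: weight each cell by how well it explains the measurement.
    posterior = [predicted[j] * likelihood[j] for j in range(n)]
    z = sum(posterior)  # normalizer
    return [p / z for p in posterior]
```

Running this step once per tracking interval keeps a full probability distribution over the grid, so implausible jumps get low posterior mass instead of corrupting the track.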
2.3. Hybrid tracking
The hybrid indoor augmented reality tracking system integrates virtual objects into the same
position in real views. The feature extraction algorithms of [23] are enhanced by merging FAST with SURF to
create FAST-SURF, which meets the demands of robustness and real-time performance. The mobile
phone's position is obtained using the fingerprinting technique: a fingerprint position database of a particular
indoor environment is built, the detected access points' RSS values at the detection points are compared and
matched against the stored records using the KNN algorithm, and the position value is determined.
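The FAST-SURF combination itself is specific to [23], but the brute-force descriptor matching with a ratio test that such feature pipelines rely on can be sketched for binary descriptors (Hamming distance; all descriptor values below are illustrative):

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(query, train, ratio=0.8):
    """Brute-force matching with Lowe's ratio test: accept a match only
    when the best distance is clearly smaller than the second best,
    which filters ambiguous matches on repetitive indoor texture.
    Returns (query_index, train_index) pairs."""
    matches = []
    for qi, qd in enumerate(query):
        dists = sorted((hamming(qd, td), ti) for ti, td in enumerate(train))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches
```

Real systems match high-dimensional SURF (float, L2 distance) or ORB (256-bit, Hamming) descriptors, but the acceptance logic is the same.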
The system built in [5] aims at automatic people tracking by mapping Wi-Fi networks to
determine people's positions. The system is multi-agent: it controls both the trolley hardware and the client
detection agent, which is responsible for detecting and calibrating the client. The main job of the trolley is to
follow the clients during their shopping process. Data captured while the vehicle moves is used to build the
signal maps, reducing the need for manual calibration and, therefore, improving data updating. A Bayesian
network classifier combines the data provided by the wireless networks to determine the final position. An
obstacle detection agent prevents collisions while the trolley is moving; HC-SR04 distance sensors are used
to detect the obstacles. A tablet is further responsible for scanning the Wi-Fi networks and the beacons using
an Edimax EW-7611UL USB adapter, which supports both Wi-Fi and Bluetooth. The database is organized
with a data agent in charge of controlling its data.
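As an aside on the HC-SR04 sensors mentioned above, converting their echo pulse into an obstacle distance is a simple time-of-flight calculation (the stop threshold and speed-of-sound constant below are illustrative assumptions, not values from [5]):

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # in air at roughly 20 degrees C

def hc_sr04_distance_m(echo_pulse_s):
    """Convert an HC-SR04 echo pulse width (seconds) into a distance in
    meters. The echo time covers the round trip to the obstacle and
    back, so the one-way distance is half the travelled distance."""
    return echo_pulse_s * SPEED_OF_SOUND_M_PER_S / 2.0

def obstacle_ahead(echo_pulse_s, threshold_m=0.5):
    """Flag an obstacle when it is closer than the stop threshold."""
    return hc_sr04_distance_m(echo_pulse_s) < threshold_m
```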
In Wu et al. [24], augmented reality, deep learning, and the cloud are used to overcome the difficulty
that GPS does not work well for indoor navigation. The user is asked to switch on the phone camera and scan
the surrounding environment against the database. The Python recognition system takes the first data in the
database and matches it with the image recognition module [64]. The images are uploaded to the back end,
where they are matched against the features of You Only Look Once, version 3 (YOLOv3) [65]. The images
are processed by binarization, contour detection, and scaling to cut redundant edges [66], [67]. The
K-nearest neighbor (KNN) algorithm was used to remove unnecessary features in a previously trained digit
recognition module [68]–[70] and to determine the user's initial position [71], [72]. The A* search algorithm
[73], [74] determines the shortest route from the initial point to the destination and suggests it to the users
[75], [76]. During indoor navigation, users can use AR electronic bulletin boards at a particular location to
view relevant information about it [77]. The common characteristics of most indoor tracking systems are
shown in Table 3.
Table 3. Characteristics of most indoor tracking systems
Indoor | Outdoor | With low light intensity | Work in multi-level | Use predefined map | Use hardware requirements | Marker | Markerless | Cloud
√ | √ | NA | √ | NA | √ | √ | NA | NA
√ | NA | √ | NA | NA | √ | NA | √ | NA
√ | NA | NA | √ | √ | NA | √ | NA | √
√ | NA | NA | NA | NA | NA | √ | NA | NA
√ | NA | √ | √ | NA | √ | NA | √ | NA
√ | NA | NA | √ | √ | NA | √ | NA | NA
√ | NA | √ | NA | NA | NA | NA | √ | NA
√ | √ | NA | √ | NA | √ | √ | NA | NA
√ | √ | NA | √ | NA | NA | √ | NA | NA
√ | NA | NA | √ | √ | NA | NA | NA | NA
√ | NA | NA | √ | √ | NA | NA | NA | NA
√ | NA | √ | NA | NA | √ | NA | √ | NA
√ | NA | NA | √ | NA | √ | NA | √ | √
√ | NA | NA | √ | √ | NA | NA | NA | √
√ | NA | NA | √ | √ | NA | NA | NA | √
√ | NA | NA | √ | NA | NA | NA | √ | √
3. RESULTS AND DISCUSSION
The main objective of using indoor tracking systems is to navigate people through complex and
unfamiliar environments. Tracking systems enable users to reach their desired destination with minimum
congestion and time consumption. The techniques adapted for outdoor tracking cannot work for indoor
tracking because of extra challenges such as color intensity and the absence of GPS. Tracking can be divided
into active, passive, and hybrid tracking.
Active tracking systems depend on using one or more communication technologies, such as Wi-Fi
or Bluetooth. To reduce the computational complexity, researchers try to improve the connectivity and the
localization accuracy, usually using the weighted centroid localization (WCL) algorithm to enhance indoor
tracking. Active indoor tracking is commonly used in low-cost systems. Moreover, it solves the tracking
problem under low light intensity. The current challenge in active indoor tracking is its low accuracy in complex
environments with multiple walls and floors, as these affect the signal strength.
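A minimal sketch of weighted centroid localization follows; the RSSI-to-weight mapping is an assumed illustrative choice, since the exact weighting function varies across papers:

```python
def weighted_centroid(beacons, rssi, g=1.0):
    """Weighted centroid localization (WCL): estimate a position as the
    weight-averaged position of the beacons in range, with weights
    derived from RSSI (stronger signal -> closer -> larger weight).
    beacons: list of (x, y) positions; rssi: received power in dBm per
    beacon; g: exponent controlling how sharply weight decays."""
    # Map dBm (e.g. -90..-30) to positive weights; w_i = (1/|rssi_i|)^g
    # is one simple, commonly used choice.
    weights = [(1.0 / abs(r)) ** g for r in rssi]
    total = sum(weights)
    x = sum(w * bx for w, (bx, _) in zip(weights, beacons)) / total
    y = sum(w * by for w, (_, by) in zip(weights, beacons)) / total
    return x, y
```

The appeal of WCL for active tracking is that it needs only beacon positions and live RSSI readings, with no offline survey and almost no computation.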
Passive tracking systems depend on using one or more image detection algorithms. These systems
use pre-loaded maps to enhance users' navigation and usually use a route planner algorithm, which
calculates the distance between the current location and the target to reduce the computation of real-time
tracking. Passive indoor tracking systems are usually used to solve the problem of complex environments.
However, low light intensity can affect their accuracy.
Hybrid tracking systems integrate communication technologies with image detection algorithms in
one system. The communication technology module calculates the position and orientation of the mobile
device, while the image detection module extracts and matches feature points between the current frame and
offline environment images. The virtual route-tracking content overlays the real-time camera view. The
advantage of hybrid indoor tracking systems is enhancing image-based tracking with sensor data.
However, their signal strength is affected in complex environments, as in active tracking. We noticed that
active tracking is the better choice when the indoor environment is covered by Wi-Fi and the fingerprinting
technique is used. Passive tracking is more convenient when environment maps exist to improve
tracking accuracy with image detection and computer vision algorithms.
Usually, hybrid tracking is used when the environment is covered with Wi-Fi and merged with
image detection and computer vision algorithms. Building a hybrid tracking model is more effective and
accurate. Hybrid tracking works in low-light environments because it depends on Wi-Fi RSS
fingerprinting to determine the user's location; in addition, image detection algorithms make it more useful
by showing the actual user tracking in the real environment. Table 4 shows when to use each of the three
tracking techniques.
Table 4. Indoor tracking systems features
Tracking technique | Tracking space | Physical world parameters | User perspective
Active | Small space; single floor | Not affected by light intensity; affected by occlusion; no predefined map needed | System needs extra hardware; easy to scale
Passive | Complex space; multiple floors | Affected by light intensity; not affected by occlusion; needs predefined map | System does not need extra hardware; hard to scale
Hybrid | Complex space; multiple floors | Not affected by light intensity; not affected by occlusion; needs predefined map | System needs extra hardware; hard to scale
As shown in Table 4, when the user environment is small, with a single floor, active tracking is
preferable. When the user environment is complex, with multiple floors, passive tracking is preferable. When
the lighting intensity in the environment is low or almost non-existent, active or hybrid tracking is preferable.
Occlusion in complex environments with multiple walls and floors affects the system's accuracy in active
tracking. The cost of active and hybrid tracking is high due to the need for hardware components. Scalability
in active tracking is easier than in passive and hybrid tracking because it does not require predefined maps.
4. CONCLUSION
Indoor tracking systems are usually implemented based on communication technologies or image
detection algorithms. This paper demonstrates various indoor tracking systems for augmented reality
applications. We compared indoor tracking systems developed based on various communication
technologies and, similarly, indoor tracking systems developed based on image detection algorithms. This
paper also provides a comprehensive review of the recent relevant applications from 2011 to 2021. Active
tracking is very stable, widespread, and computationally inexpensive, and it provides systems that can
integrate with 3D models; however, it is usually inadequate for applications that need accurate tracking and
registration. On the other hand, passive tracking is more reliable and gives more accurate pose estimation
and tracking, but it is computationally expensive and needs powerful hardware. In hybrid tracking, both
sensor-based tracking and vision-based tracking are integrated to overcome the limitations of each technique.
Recently, hybrid-based methods have produced accurate tracking that can run on handheld devices.
REFERENCES
[1] J. C. K. Chow, M. Peter, M. Scaioni, and M. Al-Durgham, “Indoor tracking, mapping, and navigation: algorithms, technologies,
and applications,” Journal of Sensors, vol. 2018, pp. 1–3, 2018, doi: 10.1155/2018/5971752.
[2] K. Shaaban, “Drivers’ perceptions of smartphone applications for real-time route planning and distracted driving prevention,”
Journal of Advanced Transportation, vol. 2019, pp. 1–10, Mar. 2019, doi: 10.1155/2019/2867247.
[3] P. K. G H, A. N, A. R, and M. P, “Indoor navigation using AR technology,” International Journal of Engineering Applied
Sciences and Technology, vol. 04, no. 09, pp. 356–359, 2020, doi: 10.33564/ijeast.2020.v04i09.045.
[4] A. Hameed and H. A. Ahmed, “Survey on indoor positioning applications based on different technologies,” 2019,
doi: 10.1109/MACS.2018.8628462.
[5] A. S. Mendes, G. Villarrubia, J. Caridad, D. H. De La Iglesia, and J. F. De Paz, “Automatic wireless mapping and tracking system
for indoor location,” Neurocomputing, vol. 338, pp. 372–380, Apr. 2019, doi: 10.1016/j.neucom.2018.07.084.
[6] J. Yin, “Bringing nature indoors with virtual reality: human responses to biophilic design in buildings,” Harvard University,
2019.
[7] A. Fiore, L. Mainetti, L. Manco, and P. Marra, “Augmented reality for allowing time navigation in cultural tourism experiences: a
case study,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes
in Bioinformatics), 2014, vol. 8853, pp. 296–301, doi: 10.1007/978-3-319-13969-2_22.
[8] J. Kim, B. Yoo, and H. Ko, “Tour cloud mobile: helping tourists acquire the information effectively using three types of views,” in
2015 IEEE International Conference on Consumer Electronics, ICCE 2015, 2015, pp. 673–674, doi: 10.1109/ICCE.2015.7066573.
[9] G. Villarrubia, J. Bajo, J. F. De Paz, and J. M. Corchado, “Monitoring and detection platform to prevent anomalous situations in
home care,” Sensors (Switzerland), vol. 14, no. 6, pp. 9900–9921, 2014, doi: 10.3390/s140609900.
[10] L. Calderoni, M. Ferrara, A. Franco, and D. Maio, “Indoor localization in a hospital environment using random forest classifiers,”
Expert Systems with Applications, vol. 42, no. 1, pp. 125–134, 2015, doi: 10.1016/j.eswa.2014.07.042.
[11] T. Nishimura, K. Koji, Y. Nishida, and H. Mizoguchi, “Development of a nursing care support system that seamlessly monitors
both bedside and indoor locations,” Procedia Manufacturing, vol. 3, pp. 4906–4913, 2015, doi: 10.1016/j.promfg.2015.07.623.
[12] E. M. Gorostiza, J. L. Lázaro Galilea, F. J. Meca Meca, D. Salido Monzú, F. Espinosa Zapata, and L. Pallarés Puerto, “Infrared
sensor system for mobile-robot positioning in intelligent spaces,” Sensors, vol. 11, no. 5, pp. 5416–5438, May 2011,
doi: 10.3390/s110505416.
[13] D. Gu and K.-S. Chen, “Design and performance evaluation of wiimote-based two-dimensional indoor localization systems for
indoor mobile robot control,” Measurement, vol. 66, pp. 95–108, Apr. 2015, doi: 10.1016/j.measurement.2015.01.009.
[14] M. Bower, C. Howe, N. McCredie, A. Robinson, and D. Grover, “Augmented reality in education – cases, places and potentials,”
Educational Media International, vol. 51, no. 1, pp. 1–15, Jan. 2014, doi: 10.1080/09523987.2014.889400.
[15] F. Debandi et al., “Enhancing cultural tourism by a mixed reality application for outdoor navigation and information browsing
using immersive devices,” IOP Conference Series: Materials Science and Engineering, vol. 364, no. 1, 2018, doi: 10.1088/1757-
899X/364/1/012048.
[16] H.-S. Gang and J.-Y. Pyun, “A smartphone indoor positioning system using hybrid localization technology,” Energies, vol. 12,
no. 19, p. 3702, Sep. 2019, doi: 10.3390/en12193702.
[17] C. S. Wang, D. J. Chiang, and Y. Y. Ho, “3D augmented reality mobile navigation system supporting indoor positioning
function,” in Proceeding - 2012 IEEE International Conference on Computational Intelligence and Cybernetics, CyberneticsCom
2012, 2012, pp. 64–68, doi: 10.1109/CyberneticsCom.2012.6381618.
[18] W. Xu, L. Liu, S. Zlatanova, W. Penard, and Q. Xiong, “A pedestrian tracking algorithm using grid-based indoor model,”
Automation in Construction, vol. 92, pp. 173–187, Aug. 2018, doi: 10.1016/j.autcon.2018.03.031.
[19] R. Pryss, M. Schickler, J. Schobel, M. Weilbach, P. Geiger, and M. Reichert, “Enabling tracks in location-based smart mobile
augmented reality applications,” Procedia Computer Science, vol. 110, pp. 207–214, 2017, doi: 10.1016/j.procs.2017.06.086.
[20] L. C. Huey, P. Sebastian, and M. Drieberg, “Augmented reality based indoor positioning navigation tool,” in 2011 IEEE
Conference on Open Systems, Sep. 2011, pp. 256–260, doi: 10.1109/ICOS.2011.6079276.
[21] A. Mulloni, D. Wagner, I. Barakonyi, and D. Schmalstieg, “Indoor positioning and navigation with camera phones,” IEEE
Pervasive Computing, vol. 8, no. 2, pp. 22–31, 2009, doi: 10.1109/MPRV.2009.30.
[22] R. Yadav, H. Chugh, V. Jain, and P. Baneriee, “Indoor navigation system using visual positioning system with augmented
reality,” in 2018 International Conference on Automation and Computational Engineering (ICACE), Oct. 2018, pp. 52–56,
doi: 10.1109/ICACE.2018.8687111.
[23] X. Yan, W. Liu, and X. Cui, “Research and application of indoor guide based on mobile augmented reality system,” in 2015
International Conference on Virtual Reality and Visualization (ICVRV), Oct. 2015, pp. 308–311, doi: 10.1109/ICVRV.2015.48.
[24] J. H. Wu, C. T. Huang, Z. R. Huang, Y. B. Chen, and S. C. Chen, “A rapid deployment indoor positioning architecture based on
image recognition,” in 2020 IEEE 7th International Conference on Industrial Engineering and Applications, ICIEA 2020, 2020,
pp. 784–789, doi: 10.1109/ICIEA49774.2020.9102083.
[25] M. Martínez del Horno, L. Orozco-Barbosa, and I. García-Varea, “A smartphone-based multimodal indoor tracking system,”
Information Fusion, vol. 76, pp. 36–45, Dec. 2021, doi: 10.1016/j.inffus.2021.05.001.
[26] M. Kirkko-Jaakkola, J. Collin, and J. Takala, “Using building plans and self-contained sensors with GNSS initialization for indoor
navigation,” in 2013 IEEE 77th Vehicular Technology Conference (VTC Spring), Jun. 2013, pp. 1–5,
doi: 10.1109/VTCSpring.2013.6692802.
[27] S. Aikawa, S. Yamamoto, and M. Morimoto, “WLAN finger print localization using deep learning,” in 2018 IEEE Asia-Pacific
Conference on Antennas and Propagation (APCAP), Aug. 2018, pp. 541–542, doi: 10.1109/APCAP.2018.8538306.
[28] A. Satan, “Bluetooth-based indoor navigation mobile system,” in 2018 19th International Carpathian Control Conference
(ICCC), May 2018, pp. 332–337, doi: 10.1109/CarpathianCC.2018.8399651.
[29] T. Boros, “Survey of indoor navigation solutions,” Multidiszciplináris Tudományok, vol. 10, no. 2, pp. 84–89, 2020,
doi: 10.35925/j.multi.2020.2.11.
[30] D. Esslinger, M. Oberdorfer, M. Zeitz, and C. Tarin, “Improving ultrasound-based indoor localization systems for quality
assurance in manual assembly,” in Proceedings of the IEEE International Conference on Industrial Technology, 2020, vol. 2020-
Febru, pp. 563–570, doi: 10.1109/ICIT45562.2020.9067248.
[31] A. S, “Implementation of indoor navigation system using infrared LEDs and receivers,” International Journal of Pure and
Applied Mathematics, vol. 2018, no. 24, 2018.
[32] P. Verma, K. Agrawal, and V. Sarasvathi, “Indoor navigation using augmented reality,” in Proceedings of the 2020 4th
International Conference on Virtual and Augmented Reality Simulations, Feb. 2020, pp. 58–63, doi: 10.1145/3385378.3385387.
[33] B. R. Naidu, P. L. Rao, M. S. P. Babu, and K. V. L. Bhavani, “Efficient case study for image edge gradient based detectors-sobel,
robert cross, prewitt and canny,” International Journal of Electronics Communication and Computer Engineering, vol. 3, no. 3,
pp. 561–571, 2012.
[34] T. Lindeberg, “Scale selection properties of generalized scale-space interest point detectors,” Journal of Mathematical Imaging
and Vision, vol. 46, no. 2, pp. 177–210, 2013, doi: 10.1007/s10851-012-0378-3.
[35] G. Tang, Z. Liu, and J. Xiong, “Distinctive image features from illumination and scale invariant keypoints,” Multimedia Tools
and Applications, vol. 78, no. 16, pp. 23415–23442, 2019, doi: 10.1007/s11042-019-7566-8.
[36] M. Ramadan, M. El Tokhey, A. Ragab, T. Fath-Allah, and A. Ragheb, “Adopted image matching techniques for aiding indoor
navigation,” Ain Shams Engineering Journal, vol. 12, no. 4, pp. 3649–3658, 2021, doi: 10.1016/j.asej.2021.04.029.
[37] D. Rondao, N. Aouf, M. A. Richardson, and O. Dubois-Matra, “Benchmarking of local feature detectors and descriptors for
multispectral relative navigation in space,” Acta Astronautica, vol. 172, pp. 100–122, 2020, doi: 10.1016/j.actaastro.2020.03.049.
[38] K. Curran, “Hybrid passive and active approach to tracking movement within indoor environments,” IET Communications,
vol. 12, no. 10, pp. 1188–1194, 2018, doi: 10.1049/iet-com.2017.1099.
[39] M. F. bin Abdul Malek, P. Sebastian, and M. Drieberg, “Augmented reality assisted localization for indoor navigation on
embedded computing platform,” in 2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA),
Sep. 2017, pp. 111–116, doi: 10.1109/ICSIPA.2017.8120589.
[40] L. Jinyu, Y. Bangbang, C. Danpeng, W. Nan, Z. Guofeng, and B. Hujun, “Survey and evaluation of monocular visual-inertial
SLAM algorithms for augmented reality,” Virtual Reality and Intelligent Hardware, vol. 1, no. 4, pp. 386–410, 2019,
doi: 10.1016/j.vrih.2019.07.002.
[41] B. Al Delail, L. Weruaga, M. J. Zemerly, and J. W. P. Ng, “Indoor localization and navigation using smartphones augmented
reality and inertial tracking,” in Proceedings of the IEEE International Conference on Electronics, Circuits, and Systems, 2013,
pp. 929–932, doi: 10.1109/ICECS.2013.6815564.
[42] S. Kasprzak, A. Komninos, and P. Barrie, “Feature-based indoor navigation using augmented reality,” in Proceedings - 9th
International Conference on Intelligent Environments, IE 2013, 2013, pp. 100–107, doi: 10.1109/IE.2013.51.
[43] S. Broumi, A. Bakal, M. Talea, F. Smarandache, and L. Vladareanu, “Applying Dijkstra algorithm for solving neutrosophic
shortest path problem,” in International Conference on Advanced Mechatronic Systems, ICAMechS, 2016, vol. 0, pp. 412–416,
doi: 10.1109/ICAMechS.2016.7813483.
[44] G. Gerstweiler, H. Kaufmann, O. Kosyreva, C. Schonauer, and E. Vonach, “Parallel tracking and mapping in Hofburg Festsaal,”
in 2013 IEEE Virtual Reality (VR), Mar. 2013, pp. 1–2, doi: 10.1109/VR.2013.6549441.
[45] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, “ORB-SLAM: a versatile and accurate monocular SLAM system,” IEEE
Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015, doi: 10.1109/TRO.2015.2463671.
[46] R. Kummerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, “G2o: a general framework for graph optimization,” in 2011
IEEE International Conference on Robotics and Automation, May 2011, no. 1, pp. 3607–3613, doi: 10.1109/ICRA.2011.5979949.
[47] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: an efficient alternative to SIFT or SURF,” in Proceedings of the
IEEE International Conference on Computer Vision, 2011, pp. 2564–2571, doi: 10.1109/ICCV.2011.6126544.
[48] C. Campos, R. Elvira, J. J. G. Rodriguez, J. M. M. Montiel, and J. D. Tardos, “ORB-SLAM3: an accurate open-source library for
visual, visual-inertial, and multimap SLAM,” IEEE Transactions on Robotics, vol. 37, no. 6, pp. 1874–1890, 2021,
doi: 10.1109/TRO.2021.3075644.
[49] D. Galvez-López and J. D. Tardos, “Bags of binary words for fast place recognition in image sequences,” IEEE Transactions on
Robotics, vol. 28, no. 5, pp. 1188–1197, Oct. 2012, doi: 10.1109/TRO.2012.2197158.
[50] H. Xu, Y. Ding, P. Li, R. Wang, and Y. Li, “An RFID indoor positioning algorithm based on bayesian probability and K-nearest
neighbor,” Sensors (Switzerland), vol. 17, no. 8, 2017, doi: 10.3390/s17081806.
[51] A. A. Nazari Shirehjini and S. Shirmohammadi, “Improving accuracy and robustness in HF-RFID-based indoor positioning with
Kalman filtering and Tukey smoothing,” IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 11, pp. 9190–9202,
2020, doi: 10.1109/TIM.2020.2995281.
[52] B. Liu and L. Meng, “Doctoral Colloquium-towards a better user interface of augmented reality based indoor navigation
application,” in Proceedings of 6th International Conference of the Immersive Learning Research Network, iLRN 2020, 2020,
pp. 392–394, doi: 10.23919/iLRN47897.2020.9155198.
[53] M. Tomaszewski, J. Osuchowski, and Ł. Debita, “Effect of spatial filtering on object detection with the SURF algorithm,”
in Advances in Intelligent Systems and Computing, vol. 720, 2018, pp. 121–140.
[54] R. Padilla, W. L. Passos, T. L. B. Dias, S. L. Netto, and E. A. B. Da Silva, “A comparative analysis of object detection metrics
with a companion open-source toolkit,” Electronics (Switzerland), vol. 10, no. 3, pp. 1–28, 2021,
doi: 10.3390/electronics10030279.
[55] E. N. G. Weng, R. U. Khan, S. A. Z. Adruce, and O. Y. Bee, “Objects tracking from natural features in mobile augmented
reality,” Procedia - Social and Behavioral Sciences, vol. 97, pp. 753–760, 2013, doi: 10.1016/j.sbspro.2013.10.297.
[56] X. Wei, D. Weng, Y. Liu, and Y. Wang, “A tour guiding system of historical relics based on augmented reality,” in 2016 IEEE
Virtual Reality (VR), Mar. 2016, vol. 2016-July, pp. 307–308, doi: 10.1109/VR.2016.7504776.
[57] G. Gerstweiler, E. Vonach, and H. Kaufmann, “HyMoTrack: a mobile AR navigation system for complex indoor environments,”
Sensors (Switzerland), vol. 16, no. 1, 2015, doi: 10.3390/s16010017.
[58] A. Xiao, R. Chen, D. Li, Y. Chen, and D. Wu, “An indoor positioning system based on static objects in large indoor scenes by
using smartphone cameras,” Sensors, vol. 18, no. 7, p. 2229, Jul. 2018, doi: 10.3390/s18072229.
[59] G. Gerstweiler, “Guiding people in complex indoor environments using augmented reality,” in 25th IEEE Conference on Virtual
Reality and 3D User Interfaces, VR 2018 - Proceedings, 2018, pp. 801–802, doi: 10.1109/VR.2018.8446138.
[60] C.-H. Lin, Y. Chung, B.-Y. Chou, H.-Y. Chen, and C.-Y. Tsai, “A novel campus navigation APP with augmented reality and deep
learning,” in 2018 IEEE International Conference on Applied System Invention (ICASI), Apr. 2018, pp. 1075–1077,
doi: 10.1109/ICASI.2018.8394464.
[61] D. Acharya, M. Ramezani, K. Khoshelham, and S. Winter, “BIM-tracker: a model-based visual tracking approach for indoor
localisation using a 3D building model,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 150, pp. 157–171, 2019,
doi: 10.1016/j.isprsjprs.2019.02.014.
[62] V. Dosaya, S. Varshney, V. K. Parameshwarappa, A. Beniwal, and S. Tak, “A low cost augmented reality system for wide area
indoor navigation,” in 2020 International Conference on Decision Aid Sciences and Application, DASA 2020, 2020, pp. 190–195,
doi: 10.1109/DASA51403.2020.9317014.
[63] W. K. Zegeye, S. B. Amsalu, Y. Astatke, and F. Moazzami, “WiFi RSS fingerprinting indoor localization for mobile devices,” in
2016 IEEE 7th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), Oct. 2016, pp. 1–6,
doi: 10.1109/UEMCON.2016.7777834.
[64] B. Rueckauer, I.-A. Lungu, Y. Hu, M. Pfeiffer, and S.-C. Liu, “Conversion of continuous-valued deep networks to efficient event-
driven networks for image classification,” Frontiers in Neuroscience, vol. 11, no. DEC, Dec. 2017, doi: 10.3389/fnins.2017.00682.
[65] R. Huang, J. Gu, X. Sun, Y. Hou, and S. Uddin, “A rapid recognition method for electronic components based on the improved
YOLO-V3 network,” Electronics (Switzerland), vol. 8, no. 8, 2019, doi: 10.3390/electronics8080825.
[66] R. Song, Z. Zhang, and H. Liu, “Edge connection based canny edge detection algorithm,” Journal of Information Hiding and
Multimedia Signal Processing, vol. 8, no. 6, pp. 1228–1236, 2017.
[67] D. Sundani, S. Widiyanto, Y. Karyanti, and D. T. Wardani, “Identification of image edge using quantum canny edge detection
algorithm,” Journal of ICT Research and Applications, vol. 13, no. 2, pp. 133–144, 2019, doi: 10.5614/itbj.ict.res.appl.2019.13.2.4.
[68] S. B. Imandoust and M. Bolandraftar, “Application of K-nearest neighbor (KNN) approach for predicting economic events:
theoretical background,” Int. Journal of Engineering Research and Applications, vol. 3, no. 5, pp. 605–610, 2013.
[69] S. Indolia, A. K. Goswami, S. P. Mishra, and P. Asopa, “Conceptual understanding of convolutional neural network - a deep
learning approach,” Procedia Computer Science, vol. 132, pp. 679–688, 2018, doi: 10.1016/j.procs.2018.05.069.
[70] A. Kristiadi and Pranowo, “Deep convolutional level set method for image segmentation,” Journal of ICT Research and
Applications, vol. 11, no. 3, pp. 284–298, 2017, doi: 10.5614/itbj.ict.res.appl.2017.11.3.5.
[71] H. Zhao, W. Cheng, N. Yang, S. Qiu, Z. Wang, and J. Wang, “Smartphone-based 3D indoor pedestrian positioning through multi-
modal data fusion,” Sensors (Switzerland), vol. 19, no. 20, 2019, doi: 10.3390/s19204554.
ISSN: 2252-8938, Int J Artif Intell, Vol. 12, No. 1, March 2023: 402-414
[72] B. Yenigun, I. R. Karas, and E. Demiral, “Real-time navigation system implementation within the scope of 3D geographic
information systems for multi storey car park,” International Archives of the Photogrammetry, Remote Sensing and Spatial
Information Sciences - ISPRS Archives, vol. 42, no. 2W1, pp. 115–120, 2016, doi: 10.5194/isprs-archives-XLII-2-W1-115-2016.
[73] S. K. Sharma and B. L. Pal, “Shortest path searching for road network using A* algorithm,” International Journal of Computer
Science and Mobile Computing, vol. 4, no. 7, pp. 513–522, 2015.
[74] G. Deepa, P. Kumar, A. Manimaran, K. Rajakumar, and V. Krishnamoorthy, “Dijkstra algorithm application: shortest distance
between buildings,” International Journal of Engineering & Technology, vol. 7, no. 4.10, p. 974, 2018,
doi: 10.14419/ijet.v7i4.10.26638.
[75] W. Li, Z. Yu, S. Wei, X. Lu, Y. Zhang, and T. Wang, “Custom Bus optimization of docking stations based on partition path
selection,” Advances in Mechanical Engineering, vol. 9, no. 10, 2017, doi: 10.1177/1687814017731544.
[76] X. Lei, Z. Zhang, and P. Dong, “Dynamic path planning of unknown environment based on deep reinforcement learning,” Journal
of Robotics, vol. 2018, 2018, doi: 10.1155/2018/5781591.
[77] H. S. Wallace, “Augmented reality: exploring its potential for extension,” Journal of Extension, vol. 56, no. 5, 2018.
BIOGRAPHIES OF AUTHORS
Ashraf Saad Shewail obtained his bachelor’s degree in computer science in May
2017. He currently works as a teaching assistant at the Computer Science Department, Faculty
of Computers and Artificial Intelligence, Benha University, Egypt, where he is also pursuing
his M.Sc. degree. His research interests include image processing, computer vision, machine
learning, and artificial intelligence. He can be contacted at email: ashraf.saad@fci.bu.edu.eg.
Neven Abdelaziz Elsayed is an assistant professor at the Faculty of Science and
Innovation, Universities of Canada in Egypt. She completed a Ph.D. in Computer Science
at the University of South Australia and was awarded the Mike Miller Medal for the outstanding
thesis of 2017. Her academic qualifications also include a B.A. and an M.S. in
Computer Engineering from Benha University. She can be contacted at email:
nevine.alsayed@fci.bu.edu.eg.
Hala Helmy Zayed received the B.Sc. degree (Hons.) in electrical engineering
and the M.Sc. and Ph.D. degrees in electronics engineering from Benha University, in 1985,
1989, and 1995, respectively. She is currently a professor at the Faculty of Information
Technology and Computer Science, Nile University, Egypt. Her research interests include computer
vision, biometrics, machine learning, image forensics, and image processing. She can be
contacted at email: hala.zayed@fci.bu.edu.eg, hhelmy@nu.edu.eg.