Due to the outbreak of the COVID-19 pandemic, college tours were suspended, and many students lost the opportunity to see their dream school's campus. To solve this problem, we developed "Virtourgo," a university virtual-tour website that uses Google Street View images gathered by a web scraper, allowing students to see what college campuses are like even when in-person tours are unavailable during the pandemic. The project consists of four parts: the web scraper script, the GitHub server, the Google Domains DNS server, and the HTML files. Challenges we met include the scraper saving repeated pictures and making the HTML dropdown menu jump to the correct location; we solved these by implementing Python and JavaScript functions that specifically target each problem. Finally, after testing all functions of the web scraper and website, we confirmed that the system works as expected and can scrape and deliver tours of any university campus or public building we want.
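The abstract mentions that the scraper had to avoid keeping repeated pictures. One simple way to handle this — a sketch under the assumption that duplicates are byte-identical downloads; the actual script may instead compare Street View panorama IDs — is to hash each image and skip hashes already seen:

```python
import hashlib

def dedupe_images(image_blobs):
    """Drop byte-identical images, keeping the first occurrence in order.

    image_blobs: an iterable of raw image bytes as downloaded by the scraper.
    """
    seen = set()
    unique = []
    for blob in image_blobs:
        digest = hashlib.sha256(blob).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(blob)
    return unique
```

Hashing keeps memory bounded (one digest per image rather than the full bytes) and preserves download order, which matters if tour stops are sequenced.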
Presentation "We are 2.0". Technology brings new value to everything we do in our lives: more time and new ways to connect. Let's use it intelligently!
Observed Technological Developments and Sustainability in PLE Diagrams - AE...Mehmet Emin Mutlu
In this study, the constraints of the start-page services learners use to create a PLE are discussed, and a mobile-application-based approach to PLE creation is suggested to overcome these constraints. To assess the applicability of the suggested approach, a prototype application was developed and user-friendly features were added to it. It was observed that, with the help of the newly developed mobile application, learners had a sustainable mobile learning environment.
This document presents a study comparing 3D walkthroughs created using a game engine to photo stitching techniques for developing virtual tours. The study aims to demonstrate that a 3D walkthrough of a university facility created in Unreal Engine 4 provides advantages over a photo stitching approach. These advantages include improved realism, interactivity, and ease of updating the virtual tour when the real environment changes. Background research on related virtual tour projects and the limitations of photo stitching techniques is also provided.
This document provides an overview of augmented reality (AR) and virtual reality (VR) technologies along the immersive experience continuum. It begins with a brief definition of VR and AR and discusses moving from basic VR environments towards more advanced immersive AR applications. Examples are given of educational uses of VR and emerging AR applications. Key challenges of AR development are outlined, including technical challenges of recognition and tracking and social challenges of adoption. Methods of accessing and creating basic AR content are described, from 360 cameras to target-based applications using tools like Vuforia and Unity. The document stresses the evolving nature of these technologies and considering appropriate educational applications.
The document discusses emerging technologies that can be used in classrooms, including virtual worlds, gaming, social networking, mobile devices, and more. It provides examples of how these technologies can be used educationally by motivating students and allowing them to learn collaboratively in simulated environments. Resources and tools are presented for using these technologies across various subject areas at different grade levels.
This document discusses using augmented reality to enhance education. It proposes developing an application that uses augmented reality to make the learning process more interactive and engaging. The application would recognize text or images from a book using the phone's camera. It would then display relevant 3D models, videos or additional information on the phone screen. This would make physical books more interactive and provide clarification of concepts without extensive screen time. The document outlines the proposed method, design and implementation using technologies like Vuforia, Unity, and Blender. It suggests this approach could help motivate students by providing immersive experiences and help them better visualize and understand concepts from textbooks.
Script for In Search of Educational Applications Using Augmented Reality Powe...Jonathan Bacon
This document includes the complete script for the "In Search of Educational Applications Using Augmented Reality" slideshow presented as a technology Brown Bag session at Johnson County Community College on Wednesday, April 11, 2012 (noon).
IRJET- University Campus Event Navigation System IRJET Journal
This document describes a proposed University campus event navigation system developed as an Android application. The application would allow administrators to add, edit, and delete events, and provide event details and locations to users. Users could register for events and receive real-time updates. The application would utilize GPS to display routes to event locations on a map. It aims to better inform students and others about campus events and make it easier to navigate to event venues. The proposed system was designed to address limitations of existing single-college applications that cannot be easily updated with location or other changes.
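The navigation system described above displays GPS routes to event venues on a map. A helper such an app might use — a sketch, not the authors' code — is the haversine great-circle distance between two coordinates, useful for sorting venues by proximity before requesting a full route:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))
```

Turn-by-turn routing itself would come from a maps API; straight-line distance is only a cheap pre-filter.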
This document provides a portfolio of projects by Sandeep Zechariah George K. It includes 13 projects in various fields including science, art, and technology. For each project there is a brief description of the motivation, process, challenges, and value added. The projects cover a range of areas such as cloud-based holter monitoring, shape displays, mobile application development, virtual and augmented reality, interactive toys, service design, and more. Skills developed include prototyping, software architecture, design, electronics, and more.
The document summarizes an event showcasing final year computing student projects at the National College of Ireland. It welcomes guests from industry to view student projects and meet with students. The projects cover various technologies and domains, with some having commercial potential. Students are congratulated on their success and encouraged to choose career or education paths after graduation.
CATWALKGRADER: A CATWALK ANALYSIS AND CORRECTION SYSTEM USING MACHINE LEARNIN...mlaij
In recent years, the modeling industry has attracted many people, causing a drastic increase in the number of modeling training classes. Modeling takes practice, and without professional training, few beginners know whether they are doing it right. In this paper, we present a real-time 2D model-walk grading app based on Mediapipe, a library for real-time, multi-person keypoint detection. After capturing the 2D positions of a person's joints and skeletal wireframe from an uploaded video, our app uses a scoring formula to provide accurate scores and tailored feedback on each user's modeling skills.
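The grading described above scores joint keypoints captured by Mediapipe. As an illustration only — the paper's actual scoring formula is not given here — one plausible approach compares joint angles against a reference pose and penalizes the mean deviation:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at keypoint b, formed by 2D keypoints a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

def grade_pose(angles, reference, full_marks=100.0):
    """Score a pose by mean absolute deviation from reference joint angles."""
    dev = sum(abs(a - r) for a, r in zip(angles, reference)) / len(angles)
    return max(0.0, full_marks - dev)
```

The keypoints themselves would come from Mediapipe's pose landmarks; the geometry above is library-independent.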
The Interaction of the Object Focus Social Network Service Websitefuwax
Yuwei Fu's MFA thesis argues that the next step in improving social network websites is to combine three concepts: objects, social network services, and physical interaction.
Members post desired items with special meaning from a particular region, and search for other members traveling from that region back to the user's location.
Travelers act as couriers, bringing back the desired items along with the experience of the trip. Couriers not only fulfill receivers' desires but also bring back nostalgia and cure receivers' homesickness.
LEARN PROGRAMMING IN VIRTUAL REALITY_ A PROJECT FOR COMPUTER SCIE.pdfssuser08e250
This document describes a virtual reality application called Computer Science Virtual Interactive Laboratory (CSVIL) that was developed to teach computer science concepts through interactive tutorials and simulations. The application contains three modules focused on sorting algorithms, programming paradigms, and lectures. Within the sorting algorithms module, students can access tutorials explaining key concepts through scrolling text, interactive animations that allow sorting arrays, and games to reinforce understanding. The goal of the application is to increase student engagement and learning outcomes for computer science topics through an immersive virtual reality environment. A pilot study was conducted to evaluate the user experience and effectiveness of the application.
The document is a thesis written by Muhammad Amir Irfan bin Mazian for his Bachelor's degree in Information Technology (Informatics Media) from Universiti Sultan Zainal Abidin in 2018. The thesis proposes developing an augmented reality mobile application called E-Library to provide an interactive experience for visitors of the university's main library. E-Library would allow users to explore a 3D model of the library and access information about its facilities in the real world using AR technologies like Unity and Vuforia. The goal is to make the library map and resources more engaging for users compared to traditional 2D printed maps.
The paper presents a system built on the Android platform targeting students at an engineering institute. The application is designed to ease day-to-day official work in the institute. Students are served with benefits such as a compilation of branch-wise question papers, general-aptitude questions, video lectures, and newspapers, along with features like a parent-teacher portal, a feedback system, and an "ask your queries" block. The application is written in core Java, the layout is built in XML, and the complete project is developed in Android Studio. Login authentication is implemented with Firebase Auth, newspapers are linked through their URLs, video lectures are embedded via an API, and a separate website was created for parent-teacher interaction. Based on this ideology, we are building an application with the Android Studio kit, chiefly concerned with effortless access to all the essentials required in an institute.
This document outlines the design of a single page application (SPA) called "Around the World: An Earthly Trivia" that is intended to be an educational geography and trivia game. It includes sections on problem definition, design specifications, top-down design diagrams, algorithm design, implementation, testing, and evaluation. The design specifies the game flow and user interactions, including a main menu, continent and difficulty selection, quiz questions with multiple-choice answers, scoring, and hangman-style incorrect-answer tracking. Algorithms are provided for key functions like selecting continents and difficulties, loading quizzes, checking answers, and updating scores. The goal is to create a fully functional SPA using JavaScript that both entertains and educates.
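The design above tracks a score alongside hangman-style misses. Sketched in Python rather than the project's JavaScript, and with a hypothetical six-stage hangman limit (the document does not state the actual limit), the state-update logic might look like:

```python
HANGMAN_STAGES = 6  # assumed number of wrong answers before the game ends

def answer_question(state, correct, points=10):
    """Update quiz state after one answer; returns the mutated state.

    state: dict with keys "score", "misses", "game_over".
    """
    if correct:
        state["score"] += points
    else:
        state["misses"] += 1
    state["game_over"] = state["misses"] >= HANGMAN_STAGES
    return state
```

Keeping all mutable game state in one structure mirrors how an SPA would hold it in a single client-side object.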
This document provides an overview of Google Glass. It discusses how Google Glass is a wearable computer with an optical head-mounted display that is being developed by Google. The glasses will run on Android and allow hands-free access to information by communicating with the internet via voice commands. Key features will include a camera, GPS, motion sensors, and the ability to pull in augmented reality information from Google services to be displayed on the lenses. While the glasses are not meant to be worn constantly, they will function as a see-through computer monitor for accessing information as needed, similar to how smartphones are used.
The document provides an overview of a seminar report submitted by Prakhar Gupta on Google Glass. The report includes an introduction to concepts like virtual reality and augmented reality. It discusses the key technologies powering Google Glass like wearable computing, ambient intelligence and 4G. The report also covers the design and working of Google Glass and analyzes its advantages and disadvantages. It concludes with the future scope of augmented reality devices like Google Glass.
This document summarizes the creation of several geospatial visualization tools to represent trauma injury data in Cape Town, South Africa. Photographs were taken to recreate Google Street View panoramas of roads with high injury rates. The photos were stitched together using Hugin to create panoramic views. Google SketchUp was used to build 3D models of intersections based on the panoramas. Google Earth was used to create bar graphs of injury data for different suburbs. Balloons in Google Earth displayed the graphs and additional information about each location.
This document provides an overview of using Google Apps to enhance learning environments. It discusses how Google's mission is to organize information and make it accessible. It then explains the differences between Google accounts, Gmail accounts, and Google Apps accounts. The document encourages evaluating Chrome web apps to write reviews and sharing recommendations. It also lists several Google tools and resources for education, including Google Drive, Google+, YouTube Edu, and Google Sites. Overall, the document aims to showcase how Google Apps can be used to connect, collaborate, and empower student learning.
Increasing the Detail and Realism in Web3D Distributed WorldTELKOMNIKA JOURNAL
A complex and detailed Web3D world representing the physical form of an institution is very difficult to build. To simplify the work, raster images taken from the real structure have been heavily used; however, this method has resulted in Web3D sites that are low on detail and offer a minimal level of realism. To overcome this deficiency, it is proposed to maximize the use of polygons. An experiment was done by re-developing the sample world with minimal use of raster images and applying polygons to 92% of the site. Site elements were also distributed across three servers to cope with the bottleneck problem that often occurs when using only one server. The result was evaluated in a series of tests of its viewing capabilities inside the web browser under various conditions, and in an acceptance test carried out by site users. The majority of testers felt immensely familiar with the details shown by the model, as they were able to get a close-to-realistic experience like a real-world walk through the actual building complex. Problems that often occur when using only one server can also be reduced by the distributed-world method.
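The paper distributes site elements across three servers to relieve the single-server bottleneck. The paper does not specify its partitioning scheme; a minimal round-robin assignment, purely as an illustration, could be:

```python
def assign_servers(elements, servers):
    """Round-robin mapping of world element names to server names."""
    return {el: servers[i % len(servers)] for i, el in enumerate(elements)}
```

Round-robin spreads request load evenly when elements are of similar size; size-aware or region-aware partitioning would be alternatives.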
The document describes a student capstone project that analyzes traffic data through collecting tweets, traffic camera images, and weather data to provide users with a traffic analysis for Cincinnati. Various techniques are used like sentiment analysis of tweets, image processing of traffic cameras to count cars, and analyzing weather patterns. A web application was created to display the current traffic analysis on a scale and provide additional context to help users understand the reported traffic status.
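The capstone above fuses tweet sentiment, camera car counts, and weather into one traffic reading on a scale. A hypothetical fusion — my own weights and 0–10 scale, not the students' actual model — might clamp a weighted sum of the three signals:

```python
def traffic_score(sentiment, car_count, rain_mm, capacity=100):
    """Combine signals into a 0-10 congestion score.

    sentiment: mean tweet sentiment in [-1, 1] (negative = complaints)
    car_count: cars counted in recent camera frames
    rain_mm:   recent rainfall in millimetres
    capacity:  assumed car count at full congestion
    """
    congestion = 5.0 * (car_count / capacity)   # up to 5 points from volume
    congestion += 2.5 * max(0.0, -sentiment)    # up to 2.5 from angry tweets
    congestion += min(2.5, rain_mm / 4.0)       # up to 2.5 from bad weather
    return round(min(10.0, max(0.0, congestion)), 1)
```

The weights are the tunable part; a real deployment would calibrate them against observed travel times.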
This document appears to be a project report for developing a travel planner application. It includes sections on introduction, literature review, problem statement, proposed solution, experimental setup and results analysis, conclusion, and future scope. The introduction provides an overview of the purpose of developing a smart travel planner app that integrates various travel related services to facilitate planning. The literature review explores several aspects of existing travel apps including user experience, mobile trends, personalization, social features, accessibility, and emerging technologies. The problem statement discusses the issues frequent travelers face when switching between different travel apps with limited storage. The remaining sections outline the proposed solution, testing process, findings, and potential for further development.
This is the story of how a small college with a department of 4 and a zero-based budget developed a mobile solution that is affordable and provides vital information to future and current students, faculty, and staff.
An ar core based augmented reality campus navigation systemYangChang12
The document describes the design of an ARCore-based augmented reality campus navigation system. Key points:
1. The system uses ARCore to superimpose 3D virtual information onto the real environment, enhancing the user experience of campus navigation.
2. A visual inertial odometry algorithm is used for real-time indoor and outdoor localization on mobile devices.
3. The system is built with Unity3D for rich user interactions and to provide augmented reality content like 3D models, audio, and video during navigation.
Home security is of paramount importance in today's world. As we rely more on technology, using it to make homes safer and easier to control from anywhere matters for occupants' safety. In this paper, we present a low-cost, AI-based home security system. The system has a user-friendly interface, allowing users to start model training and face detection with simple keyboard commands. Our goal is to introduce an innovative home security system using facial recognition technology. Unlike traditional systems, this system trains on and saves images of friends and family members. The system scans this folder to recognize familiar faces and provides real-time monitoring. If an unfamiliar face is detected, it promptly sends an email alert, ensuring a proactive response to potential security threats.
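The alert decision in the system described above hinges on whether a detected face matches any saved family member. With face encodings represented as numeric vectors (as common face-recognition libraries produce), the familiar/unfamiliar test reduces to a Euclidean-distance threshold — a sketch with an assumed tolerance of 0.6, not the paper's stated parameters:

```python
import math

def is_familiar(encoding, known_encodings, tolerance=0.6):
    """Return True if the face encoding is within tolerance of any known face."""
    for known in known_encodings:
        if math.dist(encoding, known) <= tolerance:
            return True
    return False
```

When `is_familiar` returns False, the system would trigger its email alert (e.g. via `smtplib`); the threshold trades false alarms against missed intruders.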
In the era of data-driven warfare, the integration of big data and machine learning (ML) techniques has become paramount for enhancing defence capabilities. This research report delves into the applications of big data and ML in the defence sector, exploring their potential to revolutionize intelligence gathering, strategic decision-making, and operational efficiency. By leveraging vast amounts of data and advanced algorithms, these technologies offer unprecedented opportunities for threat detection, predictive analysis, and optimized resource allocation. However, their adoption also raises critical concerns regarding data privacy, ethical implications, and the potential for misuse. This report aims to provide a comprehensive understanding of the current state of big data and ML in defence, while examining the challenges and ethical considerations that must be addressed to ensure responsible and effective implementation.
3. The system is built with Unity3D for rich user interactions and to provide augmented reality content like 3D models, audio, and video during navigation.
Similar to Web Scraper Utilizes Google Street View Images to Power a University Tour (20)
Home security is of paramount importance in today's world, where we rely more on technology, home
security is crucial. Using technology to make homes safer and easier to control from anywhere is
important. Home security is important for the occupant’s safety. In this paper, we came up with a low cost,
AI based model home security system. The system has a user-friendly interface, allowing users to start
model training and face detection with simple keyboard commands. Our goal is to introduce an innovative
home security system using facial recognition technology. Unlike traditional systems, this system trains
and saves images of friends and family members. The system scans this folder to recognize familiar faces
and provides real-time monitoring. If an unfamiliar face is detected, it promptly sends an email alert,
ensuring a proactive response to potential security threats.
In the era of data-driven warfare, the integration of big data and machine learning (ML) techniques has
become paramount for enhancing defence capabilities. This research report delves into the applications of
big data and ML in the defence sector, exploring their potential to revolutionize intelligence gathering,
strategic decision-making, and operational efficiency. By leveraging vast amounts of data and advanced
algorithms, these technologies offer unprecedented opportunities for threat detection, predictive analysis,
and optimized resource allocation. However, their adoption also raises critical concerns regarding data
privacy, ethical implications, and the potential for misuse. This report aims to provide a comprehensive
understanding of the current state of big data and ML in defence, while examining the challenges and
ethical considerations that must be addressed to ensure responsible and effective implementation.
Cloud Computing, being one of the most recent innovative developments of the IT world, has been
instrumental not just to the success of SMEs but, through their productivity and innovative contribution to
the economy, has even made a remarkable contribution to the economic growth of the United States. To
this end, the study focuses on how cloud computing technology has impacted economic growth through
SMEs in the United States. Relevant literature connected to the variables of interest in this study was
reviewed, and secondary data was generated and utilized in the analysis section of this paper. The findings
of this paper revealed that there have been meaningful contributions that the usage of virtualization has
made in the commercial dealings of small firms in the United States, and this has also been reflected in the
economic growth of the country. This paper further revealed that as important as cloud-based software is,
some SMEs are still skeptical about how it can help improve their business and increase their bottom line
and hence have failed to adopt it. Apart from the SMEs, some notable large firms in different industries,
including information and educational services, have adopted cloud computing technology and hence
contributed to the economic growth of the United States. Lastly, findings from our inferential statistics
revealed that no discernible change has occurred in innovation between small and big businesses in the
adoption of cloud computing. Both categories of businesses adopt cloud computing in the same way, and
their contribution to the American economy has no significant difference in the usage of virtualization.
Energy-constrained Wireless Sensor Networks (WSNs) have garnered significant research interest in
recent years. Multiple-Input Multiple-Output (MIMO), or Cooperative MIMO, represents a specialized
application of MIMO technology within WSNs. This approach operates effectively, especially in
challenging and resource-constrained environments. By facilitating collaboration among sensor nodes,
Cooperative MIMO enhances reliability, coverage, and energy efficiency in WSN deployments.
Consequently, MIMO finds application in diverse WSN scenarios, spanning environmental monitoring,
industrial automation, and healthcare applications.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to fields of Computer Science and Information Systems. The IJCSIT is a open access peer-reviewed scientific journal published in electronic form as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication. IJCSIT publishes original research papers and review papers, as well as auxiliary material such as: research papers, case studies, technical reports etc.
With growing, Car parking increases with the number of car users. With the increased use of smartphones
and their applications, users prefer mobile phone-based solutions. This paper proposes the Smart Parking
Management System (SPMS) that depends on Arduino parts, Android applications, and based on IoT. This
gave the client the ability to check available parking spaces and reserve a parking spot. IR sensors are
utilized to know if a car park space is allowed. Its area data are transmitted using the WI-FI module to the
server and are recovered by the mobile application which offers many options attractively and with no cost
to users and lets the user check reservation details. With IoT technology, the smart parking system can be
connected wirelessly to easily track available locations.
Welcome to AIRCC's International Journal of Computer Science and Information Technology (IJCSIT), your gateway to the latest advancements in the dynamic fields of Computer Science and Information Systems.
Computer-Assisted Language Learning (CALL) are computer-based tutoring systems that deal with
linguistic skills. Adding intelligence in such systems is mainly based on using Natural Language
Processing (NLP) tools to diagnose student errors, especially in language grammar. However, most such
systems do not consider the modeling of student competence in linguistic skills, especially for the Arabic
language. In this paper, we will deal with basic grammar concepts of the Arabic language taught for the
fourth grade of the elementary school in Egypt. This is through Arabic Grammar Trainer (AGTrainer)
which is an Intelligent CALL. The implemented system (AGTrainer) trains the students through different
questions that deal with the different concepts and have different difficulty levels. Constraint-based student
modeling (CBSM) technique is used as a short-term student model. CBSM is used to define in small grain
level the different grammar skills through the defined skill structures. The main contribution of this paper
is the hierarchal representation of the system's basic grammar skills as domain knowledge. That
representation is used as a mechanism for efficiently checking constraints to model the student knowledge
and diagnose the student errors and identify their cause. In addition, satisfying constraints and the number
of trails the student takes for answering each question and fuzzy logic decision system are used to
determine the student learning level for each lesson as a long-term model. The results of the evaluation
showed the system's effectiveness in learning in addition to the satisfaction of students and teachers with its
features and abilities.
In the realm of computer security, the importance of efficient and reliable user authentication methods has
become increasingly critical. This paper examines the potential of mouse movement dynamics as a
consistent metric for continuous authentication. By analysing user mouse movement patterns in two
contrasting gaming scenarios, "Team Fortress" and "Poly Bridge," we investigate the distinctive
behavioral patterns inherent in high-intensity and low-intensity UI interactions. The study extends beyond
conventional methodologies by employing a range of machine learning models. These models are carefully
selected to assess their effectiveness in capturing and interpreting the subtleties of user behavior as
reflected in their mouse movements. This multifaceted approach allows for a more nuanced and
comprehensive understanding of user interaction patterns. Our findings reveal that mouse movement
dynamics can serve as a reliable indicator for continuous user authentication. The diverse machine
learning models employed in this study demonstrate competent performance in user verification, marking
an improvement over previous methods used in this field. This research contributes to the ongoing efforts to
enhance computer security and highlights the potential of leveraging user behavior, specifically mouse
dynamics, in developing robust authentication systems.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to fields of Computer Science and Information Systems. The IJCSIT is a open access peer-reviewed scientific journal published in electronic form as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication.
Image segmentation and classification tasks in computer vision have proven to be highly effective using neural networks, specifically Convolutional Neural Networks (CNNs). These tasks have numerous
practical applications, such as in medical imaging, autonomous driving, and surveillance. CNNs are capable
of learning complex features directly from images and achieving outstanding performance across several
datasets. In this work, we have utilized three different datasets to investigate the efficacy of various preprocessing and classification techniques in accurssedately segmenting and classifying different structures
within the MRI and natural images. We have utilized both sample gradient and Canny Edge Detection
methods for pre-processing, and K-means clustering have been applied to segment the images. Image
augmentation improves the size and diversity of datasets for training the models for image classification
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to fields of Computer Science and Information Systems. The IJCSIT is a open access peer-reviewed scientific journal published in electronic form as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication.
This research aims to further understanding in the field of continuous authentication using behavioural
biometrics. We are contributing a novel dataset that encompasses the gesture data of 15 users playing
Minecraft with a Samsung Tablet, each for a duration of 15 minutes. Utilizing this dataset, we employed
machine learning (ML) binary classifiers, being Random Forest (RF), K-Nearest Neighbors (KNN), and
Support Vector Classifier (SVC), to determine the authenticity of specific user actions. Our most robust
model was SVC, which achieved an average accuracy of approximately 90%, demonstrating that touch
dynamics can effectively distinguish users. However, further studies are needed to make it viable option
for authentication systems. You can access our dataset at the following
link:https://github.com/AuthenTech2023/authentech-repo
This paper discusses the capabilities and limitations of GPT-3 (0), a state-of-the-art language model, in the
context of text understanding. We begin by describing the architecture and training process of GPT-3, and
provide an overview of its impressive performance across a wide range of natural language processing
tasks, such as language translation, question-answering, and text completion. Throughout this research
project, a summarizing tool was also created to help us retrieve content from any types of document,
specifically IELTS (0) Reading Test data in this project. We also aimed to improve the accuracy of the
summarizing, as well as question-answering capabilities of GPT-3 (0) via long text
In the realm of computer security, the importance of efficient and reliable user authentication methods has
become increasingly critical. This paper examines the potential of mouse movement dynamics as a
consistent metric for continuous authentication. By analysing user mouse movement patterns in two
contrasting gaming scenarios, "Team Fortress" and "Poly Bridge," we investigate the distinctive
behavioral patterns inherent in high-intensity and low-intensity UI interactions. The study extends beyond
conventional methodologies by employing a range of machine learning models. These models are carefully
selected to assess their effectiveness in capturing and interpreting the subtleties of user behavior as
reflected in their mouse movements. This multifaceted approach allows for a more nuanced and
comprehensive understanding of user interaction patterns. Our findings reveal that mouse movement
dynamics can serve as a reliable indicator for continuous user authentication. The diverse machine
learning models employed in this study demonstrate competent performance in user verification, marking
an improvement over previous methods used in this field. This research contributes to the ongoing efforts to
enhance computer security and highlights the potential of leveraging user behavior, specifically mouse
dynamics, in developing robust authentication systems.
Image segmentation and classification tasks in computer vision have proven to be highly effective using neural networks, specifically Convolutional Neural Networks (CNNs). These tasks have numerous
practical applications, such as in medical imaging, autonomous driving, and surveillance. CNNs are capable
of learning complex features directly from images and achieving outstanding performance across several
datasets. In this work, we have utilized three different datasets to investigate the efficacy of various preprocessing and classification techniques in accurssedately segmenting and classifying different structures
within the MRI and natural images. We have utilized both sample gradient and Canny Edge Detection
methods for pre-processing, and K-means clustering have been applied to segment the images. Image
augmentation improves the size and diversity of datasets for training the models for image classification.
This work highlights transfer learning’s effectiveness in image classification using CNNs and VGG 16 that
provides insights into the selection of pre-trained models and hyper parameters for optimal performance.
We have proposed a comprehensive approach for image segmentation and classification, incorporating preprocessing techniques, the K-means algorithm for segmentation, and employing deep learning models such
as CNN and VGG 16 for classification.
- The document presents 6 different models for defining foot size in Tunisia: 2 statistical models, 2 neural network models using unsupervised learning, and 2 models combining neural networks and fuzzy logic.
- The statistical models (SM and SHM) are based on applying statistical equations to morphological foot data.
- The neural network models (MSK and MHSK) use self-organizing Kohonen maps to cluster foot data and model full and half sizes.
- The fuzzy neural network models (MSFK and MHSFK) incorporate fuzzy logic into the neural network learning process to better account for uncertainty in foot sizes.
The security of Electric Vehicle (EV) charging has gained momentum after the increase in the EV adoption
in the past few years. Mobile applications have been integrated into EV charging systems that mainly use a
cloud-based platform to host their services and data. Like many complex systems, cloud systems are
susceptible to cyberattacks if proper measures are not taken by the organization to secure them. In this
paper, we explore the security of key components in the EV charging infrastructure, including the mobile
application and its cloud service. We conducted an experiment that initiated a Man in the Middle attack
between an EV app and its cloud services. Our results showed that it is possible to launch attacks against
the connected infrastructure by taking advantage of vulnerabilities that may have substantial economic and
operational ramifications on the EV charging ecosystem. We conclude by providing mitigation suggestions
and future research directions.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to fields of Computer Science and Information Systems. The IJCSIT is a open access peer-reviewed scientific journal published in electronic form as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to fields of Computer Science and Information Systems. The IJCSIT is a open access peer-reviewed scientific journal published in electronic form as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
Web Scraper Utilizes Google Street View Images to Power a University Tour
International Journal of Computer Science & Information Technology (IJCSIT) Vol 13, No 4, August 2021
DOI: 10.5121/ijcsit.2021.13402

WEB SCRAPER UTILIZES GOOGLE STREET VIEW IMAGES TO POWER A UNIVERSITY TOUR

Peiyuan Sun¹ and Yu Sun²

¹Webb School of California, Claremont, CA 91711
²California State Polytechnic University, Pomona, CA 91768
ABSTRACT
Due to the outbreak of the Covid-19 pandemic, college tours are no longer available, so many students
have lost the opportunity to see their dream school’s campus. To solve this problem, we developed a
product called “Virtourgo,” a university virtual tour website that uses Google Street View images gathered
by a web scraper, allowing students to see what college campuses are like even when tours are
unavailable during the pandemic. The project consists of four parts: the web scraper script, the GitHub
server, the Google Domains DNS server, and the HTML files. Challenges we met include scraping
repeated pictures and making the HTML dropdown menu jump to the correct location. We solved these by
implementing Python and JavaScript functions that specifically target such challenges. Finally, after
experimenting with all the functions of the web scraper and website, we confirmed that the product works
as expected and can scrape and deliver tours of any university campus or public building we want.
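The repeated-picture problem mentioned above can be handled by deduplicating the scraper's output. The following is a minimal sketch, not the paper's actual implementation: it assumes the scraper has already saved each Street View image into a local folder, and the function name `deduplicate_images` is our own illustrative choice. It hashes each file's bytes and deletes any file whose content duplicates an earlier one.

```python
import hashlib
import os

def deduplicate_images(folder):
    """Delete files whose byte content duplicates an earlier file.

    Illustrative sketch: assumes scraped images are saved in `folder`.
    Keeps the first copy of each distinct image (by SHA-256 of its bytes)
    and returns the names of the removed duplicates.
    """
    seen = set()
    removed = []
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest in seen:
            os.remove(path)       # duplicate content: drop this file
            removed.append(name)
        else:
            seen.add(digest)      # first time we see this content
    return removed
```

Hashing full file bytes catches exact repeats, which is the common case when a scraper requests overlapping Street View panoramas; near-duplicate detection would need perceptual hashing instead.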
KEYWORDS
Web scraping, virtual tour, cloud computing
1. INTRODUCTION
Due to the Covid-19 pandemic, in-person campus tours are no longer possible. This means that a
lot of seniors in high school will miss the opportunity to see their dream schools’ campuses,
which is significant in helping them form an idea of what college is like. Currently, there lacks a
persuasive virtual tour platform on the internet that can deliver to students and families a
comprehensive overview of what school campuses really look like. In recent years, more than
two million American high school seniors that graduate each year enroll in a college or
university, which is about 66% of each class. Although it is hard to predict the percentage of
these two million students that tried to get a virtual tour, it is a reasonable deduction that many of
them wanted at least some guide to know what kind of school they are applying to. In the future,
the number will only rise, as both the total number of high school students and high school
graduation rates are projected to increase. We predict that even after the pandemic, when colleges
and universities reopen their campuses to the public, there will still be a need for virtual tours, as
not every applying senior will be able to take a tour at every school that they are interested in.
This leaves a strong demand for a platform hosting virtual tours of universities. The same applies
to other public organizations or buildings. Not only do schools need virtual tours; public
venues as large as sports stadiums and airports, or as small as restaurants and stores, might also
find themselves one day in need of a virtual tour to increase publicity or improve
customer experience.
Some existing methods have already been used to address this problem. Some universities and
public places have had outside companies develop virtual tours in the form of 360-degree images,
International Journal of Computer Science & Information Technology (IJCSIT) Vol 13, No 4, August 2021
which allows the user to switch between images and navigate around the place [11, 12, 13].
However, this requires people to go around the place to take pictures, which is inefficient and
often takes a lot of time and effort to complete. Also, it is hard to get permission for such tours,
especially during the pandemic, when schools are extra cautious about the potential spread of
the virus. Therefore, if images were not taken prior to 2020, tours using this method would be
extremely hard to develop during the pandemic.
In fact, immersive tours in the form of 360-degree pictures or, if possible, virtual reality
produce the best results, because they lead to better long-term memory of the experience [14].
One important goal of virtual tours is to make people remember the university, and immersive
tours are better than 2D pictures at letting people experience the school visually, so they are
preferable. However, because virtual reality technology is still not mature, there are obstacles
to relying solely on it for campus tours [15]. A major challenge is that virtual reality goggles
are not yet widespread, as they are often expensive, and people are unlikely to buy a pair of
goggles just to take a school virtual tour. As a result, because 2D pictures cannot provide an
immersive experience, and virtual reality has not been popularized enough, using 360-degree
images seems to be the
best approach. It is also better to develop a website than a phone application, because computer
screens are larger and can give a better view, and people don’t have to download an application
in order to access the virtual tour.
In this paper, we follow the same line of research to provide immersive experiences in the form
of 360 images. Our goal is to create a service that can generate simple tours within a short period
of time and does not require people to travel to the campus during the pandemic. Our method is
inspired by web scraping and Google Street View [1, 2]. Google Street View is a function
developed by the Google Earth [3, 4] team to provide users with the opportunity to explore the
world in the form of 360-degree images, which can be taken with special panoramic cameras able
to capture the surrounding environment; an example of a panoramic camera would be Ricoh
Theta [9,10]. Because there are many users of Google Street View all around the world [22], and
many researchers also use it for environmental protection and social development [23, 24],
Google’s team constantly spends a lot of effort ensuring a clean, high-quality environment on
the Street View platform; the quality of pictures on Google Street View, as a result, serves as a
baseline for our service as well. First, we developed a Python scraping script that finds 60
locations near a given university and an available 360-degree picture around each location [5, 6].
Web scraping is a popular method of gathering information from the internet for analytical or
personal use. Second, after deleting repeated pictures, we output the scraped information into
a json file. Finally, we developed a website that pulls the information from the json file and
shows the tour. Google has many users; some of them own 360 cameras and have already taken
and uploaded pictures of school campuses. This effectively eliminates the need to send people
to the school to take pictures, as a web scraper lets us take advantage of images that are already
available on the internet.
In two application scenarios, we not only demonstrated the effectiveness of our scraping script,
but also proved the usefulness of our website in an actual environment. First, we generated a list
of the top 100 schools in the United States, as well as every registered university or college in
California. We then ran our Python scraping script and waited for it to iterate through every
school on the list. When each school was completed, the terminal would print the name of the
school and its index in the list (the first school is 1, the second is 2, etc.). After the scraping part
was done, two processing functions went through the results, checking for repeated pictures and
changing the descriptions. We successfully acquired a file named “data.json” that contains all the
scraped information. The scraping script worked just the way we expected it to. Secondly, we
copied the data.json file and put it in the same directory as our website html file [7]. We pushed the
information to GitHub, our website’s server [8], and after a while, the data showed up on our
domain “https://www.virtourgo.com.” After we typed in the name of a random school in our list
of schools and clicked the button to start the virtual tour, we were led to a separate tour page,
where we could not only see the street view panorama up and working, but also a places list
dropdown menu that shows all the available buildings at the school, two descriptions that tell
people the name of each building and the next one, and a button to go to the next building on the
list. The website also worked just as we intended.
The rest of this paper is organized as follows: Section 2 provides insight on the challenges we
experienced during the development process of the web scraper and website; Section 3 focuses
on the details and gives a guide for how we designed the different components, as well as how we
solved the problems mentioned in Section 2; Section 4 gives an overall evaluation of our final
product; Section 5 presents the related works done in this field. Finally, Section 6 allows for
concluding remarks and future possibilities for the project.
2. CHALLENGES
In order to build a university virtual tour website that uses Google Street View images gathered
from a web scraper, a few challenges have been identified as follows.
2.1. Challenge 1: The Scraping Script Does Not Tell What the Next Place Is
The scraping script generates a json file, which consists of two descriptions for each location.
Description1 is a sentence that states the name of the current location, while description2 is a
sentence that either tells the viewer what the next location is, or concludes the tour if it is the last
stop. However, when the Python script searches for available 360 pictures, it finds the
locations near the targeted school one by one, which means it cannot get the name of the next
location to put in “description2.” To fix this, we implemented a function
called “processing(inp)” (see Figure 1) that runs after the scraping part finishes. The
processing function will iterate through every location in the data dictionary, changing the
“description2” of each location with the name of the next location before outputting the data as a
json file.
Figure 1. “processing (inp)”
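Since Figure 1 is reproduced as an image, the post-processing logic it shows can be sketched as follows. This is a sketch only, assuming the scraped data uses the data.json layout described later in this paper (a “data” list of schools, each holding a “buildings” list in tour order); the exact wording of the descriptions is illustrative:

```python
def processing(inp):
    """Fill "description2" for each building so it names the next stop.

    Sketch only: assumes inp follows the data.json layout described in
    this paper (a "data" list of schools, each with a "buildings" list
    of scraped locations in tour order).
    """
    for school in inp["data"]:
        buildings = school["buildings"]
        for i, building in enumerate(buildings):
            if i + 1 < len(buildings):
                next_name = buildings[i + 1]["title"]
                building["description2"] = f"The next stop is {next_name}."
            else:
                # Last stop: conclude the tour instead of naming a next stop.
                building["description2"] = "This is the last stop of the tour."
    return inp
```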
2.2. Challenge 2: The Scraping Script Will Always Produce Duplicate Locations
Our scraping script is designed to find available 360 pictures around several locations in each
school. However, because there might not be that many 360 pictures on Google Street View,
every school will more or less have locations with the same pictures, depending on the school and
its available pictures. If a school has abundant images (meaning that people in the past have taken
a lot of 360 pictures and uploaded them), then it will have fewer places with the same images.
Obviously, users will not want to see the same image twice for a school, so in order to avoid this
problem we implemented another processing function called “check_repeat(inp)” (see Figure 2).
The function will iterate through every location of every school scraped, truncate the latitude
and longitude to four decimal places (because coordinates that differ only in later decimal
places usually represent the same image), and delete the location if its image has already
appeared, before the data gets output as a json file. Meanwhile,
the function also prints the name and the number of deleted locations in the terminal.
Although this will not eradicate the problem entirely since sometimes the latitude and longitude
will still show the same image, it will effectively solve the problem in 80% of the instances.
Figure 2. “check_repeat (inp)”
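Figure 2 is likewise an image; a minimal sketch of the duplicate check, under the same data.json layout assumption (the field names “lat” and “lng” are illustrative):

```python
def check_repeat(inp):
    """Drop locations whose truncated coordinates were already seen.

    Sketch only: latitude and longitude are truncated to four decimal
    places, so two locations differing only in later digits are treated
    as the same 360 picture, as described in Section 2.2.
    """
    for school in inp["data"]:
        seen = set()
        kept = []
        for building in school["buildings"]:
            # Truncate (not round) both coordinates to four decimal places.
            key = (int(building["lat"] * 10**4), int(building["lng"] * 10**4))
            if key in seen:
                continue  # same picture as an earlier location: delete it
            seen.add(key)
            kept.append(building)
        removed = len(school["buildings"]) - len(kept)
        school["buildings"] = kept
        # Report the school and how many duplicates were deleted.
        print(school.get("name", "unknown"), "-", removed, "locations removed")
    return inp
```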
2.3. Challenge 3: The “List of Places” Menu Cannot Jump to the Correct Location
On the tour page, each school has its own “List of Places” menu in the navigation bar that
shows a list of all the places available on the site. This lets users see all the available locations
and jump to the one they want. However, since we did not make a new html page for
every location, it is difficult to make the menu jump to the correct location of the correct school,
especially when there are a lot of locations stored in the database. Therefore, we developed a
JavaScript function, “goToNextPlaceWithIndex(gotoIndex)” (see Figure 3). This function will
take the index of the location that appears on the List of Places menu, find the data stored with
the index, and start a street view panorama with that location.
Figure 3. “goToNextPlaceWithIndex(gotoIndex)”
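The function in Figure 3 is JavaScript; for illustration only, the same index lookup can be sketched in Python, assuming the data.json field names used elsewhere in this paper (“position” and “pov” are the parameters the street view panorama needs, as described in Section 3):

```python
def go_to_place_with_index(school, goto_index):
    """Return the panorama parameters for the menu entry at goto_index.

    Python illustration of the lookup the JavaScript function performs:
    the index shown in the "List of Places" menu maps directly onto the
    school's "buildings" list, so no per-location HTML page is needed.
    """
    building = school["buildings"][goto_index]
    return {
        "position": {"lat": building["lat"], "lng": building["lng"]},
        "pov": {"heading": building["heading"], "pitch": building["pitch"]},
    }
```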
3. SOLUTION
Our finished product – Virtourgo – is a website that displays virtual tours for well-known
universities in the world. It implements a web scraper to gather information from the internet and
compile it into a virtual tour. The scraping process is done in the background, so individual users
will not need to see or change anything to access the tour. The scraper uses Selenium in Python to
search 1) buildings on the campus of the desired school, 2) available Google Street View pictures
around each building, and 3) each picture’s latitude and longitude. The information scraped is
then re-evaluated by another script, which goes through each location to make sure they are not
repeated. To conclude the scraping process, we stored the information scraped in a json file
within a GitHub repository. Like any other website, users can access Virtourgo by entering
the URL, which in this case is virtourgo.com, in a web browser. The DNS server we chose is
Google Domains, and the server is GitHub. Users are automatically directed to a page
named “index.html,” the main page of the website. On the homepage, there is a search box with
an auto-complete function. Users enter the name of their desired school and click the “Start
Virtual Tour” button to go to the tour page that shows the scraped locations one at a time. On the
tour page, most of the screen will be the Google Street View images embedded on the page, and
below those will be some descriptions about the place, as well as what the next stop will be.
When the tour is concluded, users will be prompted to return to the home page, where they can
feel free to start another tour. In sum, there are three main components of our system (see boxes
in Figure 4):
● a Python Web Scraper that gathers information about all the locations
● a GitHub repository that acts as the server, hosting the website and storing the scraped
information
● a DNS Server by Google Domains that processes the request
The following sections will describe the implementation process for each of the three
components, as well as the website itself, in detail.
Figure 4. Three main components of our system
Web Scraper
For the Web Scraper part, our goal is to find the latitude and longitude of places of interest at and
around the given school. To ensure efficiency, we would also like to scrape many schools at
once, given a list of all the schools we want. First, we need to make sure that the name of the
school we have is the same one that appears on Google Maps. We can do this by finding the
place with the closest name to the one we inputted. This is to prevent unexpected writing errors
when typing in or copying the name of the school. Then, we can start searching for the actual
locations around the place we want. For each place, we will find the closest 60 locations around
it; for each location, we will gather the name of the place and the latitude and longitude of the
closest available image. The code to find places around each school is shown in Figure 5. To be
able to search many schools at a time, we implemented another function, which simply iterates
through a list of different schools to find the information above for each one. This way we don’t
have to run the code for each school. Finally, we will output the data we scraped into a json file,
which we will use in the actual website. The json file, which we name as “data.json,” is designed
specifically for the website and has all of the same variable names. This allows us to simply copy
the “data.json” file from the Python scraper directory and directly paste it in the website
repository without needing to change its format.
Figure 5. Code to find places around each school
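The closest-name check itself is not shown in Figure 5; one simple way to implement it, sketched with Python’s standard difflib (in the real scraper the candidates would come from Google Maps search results, and the 0.6 similarity cutoff is an assumption for illustration):

```python
import difflib

def closest_school_name(query, candidates):
    """Return the candidate name closest to the (possibly mistyped) query.

    Sketch of the name-matching step described above, which guards
    against typing or copying errors in the input school list.
    """
    matches = difflib.get_close_matches(query, candidates, n=1, cutoff=0.6)
    return matches[0] if matches else None
```

For example, `closest_school_name("Harvad University", ["Harvard University", "Harvey Mudd College"])` returns "Harvard University" despite the typo.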
Of course, because not every location has its own 360 picture on Google Street View available,
many locations will show the same picture, so we have to use another function to rule out
locations with the same images. Because we chose to find the locations first, and then find 360
pictures according to the locations, duplicate pictures are unavoidable. An alternative scraping
method would be to find the pictures first, then find the name of the building closest to the
picture. This would eradicate duplicate pictures. However, Google Street View currently does not
support searching surrounding buildings’ names with a 360 picture. It only functions the other
way around, that is, to search the closest 360 pictures of a building. Therefore, we have to deal
with the duplicate pictures elsewhere with another function.
Google Domains DNS Server
The DNS server used for the website is Google Domains, which hosts our domain. First, we
went to the domain registration page at “https://domains.google.com/registrar/search.”
Then, we entered the name we wanted for the domain: “virtourgo.” Next, we chose the
ending of our domain, which is “.com” in our case. Finally, we needed to pay an annual fee of
$12 for Google to host our domain. After we paid for the domain and logged in to the Google
account that bought it, we navigated to “My Domains,” selected the domain we just
registered, and clicked on “DNS.” There are a couple of records we need to put in before it starts
working (see Figure 6).
Figure 6. DNS records
In the box above, we entered “185.199.108.153” into the “IPv4 address” section and clicked
the + button to its right. Next, we entered the remaining three IPv4 addresses,
“185.199.109.153,” “185.199.110.153,” and “185.199.111.153.” These are the IP addresses that
belong to GitHub. We then clicked “Add” to finish. Finally, in the dropdown menu that
says “A,” we selected “CNAME,” and in the “Domain name” section we
put in the domain we just registered, “virtourgo.com.”
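In zone-file terms, the records above amount to roughly the following sketch (the www CNAME target is a hypothetical GitHub Pages hostname shown only for illustration; in practice the records are entered through the Google Domains form as described above):

```
virtourgo.com.      A      185.199.108.153
virtourgo.com.      A      185.199.109.153
virtourgo.com.      A      185.199.110.153
virtourgo.com.      A      185.199.111.153
www.virtourgo.com.  CNAME  <username>.github.io.
```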
GitHub Server
All the html programs of the website, as well as the scraped information, are stored in a GitHub
repository, which also serves as the server of the website. To do this, we first created a GitHub
repository and pushed the html and json files to the repository. Then, we went to the
“settings” page of the repository, which can be found at the top of the repository menu, and found
a section called “GitHub Pages” (see Figure 7). In this section, we entered the domain registered
before in the “custom domain” part and clicked “save.” In fact, users can host their websites on
GitHub without owning their own domains, so if you don’t mind the domain showing your
username and “github.io,” you could skip the “Google Domains DNS Server” part. But
because we want our website to show our custom domain, we needed to add this in the GitHub
repository settings. Finally, we checked the box that says “enforce HTTPS,” which stands for
“Hypertext Transfer Protocol Secure.” This gave our website secure communication by
encrypting the data requests automatically.
Figure 7. “GitHub Pages”
Website Development
There are three main technical parts of the website development: the search
box, the function to find the desired school, and the embedding
of the Google Maps Street View window into our website. For the search box, we wanted an
auto-complete function, so users would not need to enter the precise name that appears on our
website. We implemented the autocomplete function using jQuery. We created a “#schoolInput”
selector (see Figure 8) and used it when defining our search box (id=“schoolInput”).
The source “schools” is a variable we created with all the names in our school list, and the delay
is the time in milliseconds the search box waits before giving its suggestions. For instance, in
our case, users will see the search box automatically suggesting the name of the school according
to what the user has already typed in after 600 milliseconds of idle time.
Figure 8. Autocomplete function using jQuery
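For illustration, the filtering the autocomplete applies to the “schools” source can be sketched in Python (jQuery UI’s default autocomplete filter is a case-insensitive substring match; the sample list in the usage below is assumed):

```python
def autocomplete_matches(term, schools):
    """Return all school names containing the typed term, ignoring case.

    Python illustration of the suggestion filtering the jQuery
    autocomplete widget performs after the configured typing delay.
    """
    term = term.lower()
    return [name for name in schools if term in name.lower()]
```

Typing “har” against a list holding “Harvard University,” “Harvey Mudd College,” and “Charles R. Drew University of Medicine and Science” would match all three, as seen in Experiment 1.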
For our search box to be able to jump to the correct school, we created another handler,
“#startVirtualTour” (see Figure 9). In this handler, we retrieve the name of the school
from the “#schoolInput” box and find the index of that school’s name. Then we
navigate to that specific school’s index with another function, “goToNewPage,” which leads
us to the actual tour page.
Figure 9. “#startVirtualTour”
For the Google Maps Street View function, there are a couple of parameters we have to supply
for the street view panorama to initialize (see Figure 10): the position, which includes the latitude
and longitude of the location, and the pov, which includes the heading and the pitch. The latitude
and longitude determine the exact position of the place on the map, so that Google Street
View can find it in its system. The heading signifies the direction the user is looking (e.g.,
north or south), and the pitch determines the angle of the pov (looking straight
ahead or pointing at the sky). We then accessed the information scraped in the previous section in
a function attached to the element with id “pano,” and entered those numbers. Notably, besides
linking the JavaScript file in our html file as usual, we also inserted the following script tag with
our Google API key in the html file (we replaced our actual API key with “your_api” for
security reasons):
<script async defer
src="https://maps.googleapis.com/maps/api/js?key=your_api&callback=initialize"></script>
Figure 10. Stats to enter for street view panorama
4. EXPERIMENT
To evaluate the major components described above, we designed three experiments to see if they
work as expected.
Experiment 1
The first experiment aimed to test the autocomplete search box and the jump-to-school
function. On the website, we typed in the first few letters of a school we scraped. If
our targeted school appeared in the autocomplete list, we then clicked the “Start Virtual Tour”
button to see whether we were navigated to the correct school. We picked 20 schools randomly
from the list, and if all the schools passed this test, we considered the whole system to
fulfill our expectations.
Figure 11. Typing the first three letters, “Har…”
We tried the test with 20 schools randomly selected from the school list, one of which is Harvard
University. We typed in the first three letters of the name “Harvard” (see Figure 11), and the
autocomplete function successfully provided us with all the schools whose names include “har.”
In our database, there are currently three schools with “har” in their names: “Harvard University,”
“Charles R. Drew University of Medicine and Science,” and “Harvey Mudd College.” After we
clicked and selected our targeted school Harvard University, we clicked the “Start Virtual Tour”
button at the right of the search box. The page then changed to the main tour page (see Figure
12), which shows the first stop of Harvard University.
Figure 12. The main tour page for Harvard, which shows the first stop of Harvard University
We repeated the process for another 19 randomly selected schools and got the results we
expected: the autocomplete function showed us each school’s name, and the start tour button led
us to the tour page with the correct school’s images.
Experiment 2
The second experiment was designed to test the scraping function. We randomly generated 20
schools and put them in a list named “schools.” Then we called for the scraping function and the
processing function (see Figure 13).
Figure 13. Scraping and processing function
The processing function’s purpose is to fix the problem with the location names. During the
scraping process, we could not get the name of the next location, so we could not tell users where
the next stop is. Therefore, the processing function goes into every location and changes the
part that says what the next stop is, stored in a key called “description2.” After the
scraping is done, we output the data into a json file called “data.json.”
After the scraping is done (after each school’s scraping process is completed, the terminal will
print the index and the name of the school), we can open the output file “data.json” and see the
results (see Figure 14).
Figure 14. Open output file “data.json”
The file is constructed with many dictionaries in json format. In the main dictionary, there is a
timestamp, which represents the time the scraping action was performed, as well as a list called
“data.” This list is composed of 20 dictionaries that each correspond to their respective school.
Each school dictionary includes the school’s name, the school’s address, the
school’s location in terms of latitude and longitude, and a list of buildings. The list of buildings
holds 60 dictionaries that each contain the information for a particular building: the name of
the building, stored as “title”; a sentence describing the name of this building, called
“description1”; a sentence telling users where the next stop is, stored in “description2”; the
latitude and longitude of the 360 picture; and the heading and pitch (also
called “pov” in the html file of the website) of the location.
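The layout just described can be sketched as a small Python dictionary (all concrete values are made up for illustration; the field names follow the description above):

```python
# Hypothetical excerpt mirroring the data.json layout described above.
data = {
    "timestamp": "2021-08-01 12:00:00",  # when the scrape was performed
    "data": [
        {
            "name": "Example University",
            "address": "1 Example Ave, Example City, CA",
            "lat": 34.0575,          # school's own coordinates
            "lng": -117.8211,
            "buildings": [
                {
                    "title": "Main Library",
                    "description1": "This is the Main Library.",
                    "description2": "The next stop is the Student Center.",
                    "lat": 34.0580,      # coordinates of the 360 picture
                    "lng": -117.8220,
                    "heading": 120.0,    # compass direction of the view
                    "pitch": 0.0,        # 0 = looking straight ahead
                },
            ],
        },
    ],
}
```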
Experiment 3
Besides the processing function, we also wanted to test the function we designed to prevent
duplicate locations. The third experiment is designed to test the effectiveness of the
“check_repeat(inp)” function mentioned in the “Challenges” section.
First, we ran the scraping script of the “schools” list mentioned above, which consists of 20
randomly generated schools, without activating the “check_repeat(inp)” function; we named that
json file “unprocessed.” Then, we ran the scraping script, but this time with the function, and
named its corresponding file “completed.” Next, we went through each school in the school list
of both the “unprocessed” and “completed” file with our website and recorded the number of
duplicate locations in each file. Finally, we compiled the number of duplicate locations of each
file into an excel table (see Figure 15).
Figure 15. Number of duplicate locations in each file
In the “unprocessed” json file, each school had an average of 60 total locations, which
matches our expectations, because we scraped 60 locations per school. Of these 60
locations, an average of 31 were duplicates. In contrast, in the “completed” file, each school
had an average of only 38 locations and 9 duplicates, a 71% decrease in the number of
duplicate locations. Our check function successfully ruled out an average of 22 duplicate
locations. Furthermore, in the “unprocessed” file, the rate at which duplicate locations occur is
51.6%, while in the “completed” file, the rate is only 23.7%, a 54.1% decrease in the
average duplicate rate. Although a 23.7% duplicate rate will still affect users’ experience to
some degree, it is already much better than 51.6%. In the future, we hope to develop more
advanced functions to check for repeated 360 pictures and lower the duplicate rate to less
than 5%.
The experiment results show that the website brings up the correct school and the scraping script
overall met our expectations.
5. RELATED WORK
Andri, C., et al. created a virtual tour for Management and Science University (MSU) with
augmented reality and virtual reality in 2019 and recorded that the majority of the users were
happy with the product [16]. Their work is more detailed but entails more time for each specific
school. Our work is a more general script that can cover the tours of many schools at the
same time. Perdana, D., et al. organized a virtual tour with 360-degree pictures using 3DVista for
3 buildings of Telkom University (Tel-U) in Indonesia [17]. Their 360 pictures were made by
stitching together 2D images, while our 360-degree pictures were scraped from
Google Street View. Similar to the augmented 360-degree panoramic safety training done by
Eiris, R. et al., their works are more interactive than ours, but require more precise planning of
the tour route [18]. Li, X. et al. developed a program to use 360-degree panoramic pictures from
Google Street View to map urban landscapes and quantify environmental features [19]. Both
works use Google Street View as the source of 360 pictures, but while they analyze the city,
our project develops a tour with the pictures.
Thennakoon, M. et al. developed a mobile tour application, using web scraping to compile
information from Wikipedia for tourists [20]. Both applications are web-based products, but
their work requests information from Wikipedia, while ours comes from Google Street View.
Widiyaningtyas, T. et al. also aimed at creating a virtual tour for a school campus with ORB image
stitching [21]. Their work is made possible by image stitching, which involves taking several 2D
pictures and combining them into one panoramic image, while ours does not require us to take
pictures ourselves.
6. CONCLUSION AND FUTURE WORK
In this project, we successfully developed a python web scraper that scrapes images from Google
Street View to compile a tour for a given list of universities. We also created and launched a
website that makes use of the data scraped from a json file and can show the school the user
wants based on a search using a search box.
The entire project can be split into four parts: the scraping script, the GitHub server, the
Google Domains DNS Server, and the HTML website program. We discussed the
implementation process of all of them in Section 3. In Section 4, we ran a series of experiments
and showed that each part functions and the website works as planned, delivering a user’s
tour smoothly. The experiments also showed that we solved the challenges effectively.
During the Covid-19 pandemic, many students are locked in their homes, unable to attend an
actual university tour. The purpose of this project was to provide a solution to this problem by
developing a virtual tour using 360-degree images. Through the experiments, we have proven
that users can achieve this goal effectively on our website, and that the scraping script can create
tours for many schools efficiently. In fact, the tour can be used not only for universities but also
for other public buildings or gathering places. It is even possible to scrape other information,
such as detailed descriptions, people’s comments, or 2D images of each school.
Notably, the product of our research can be applied in many scenarios other than university tours.
Because our web scraper has the ability to scrape 360 pictures of universities, it can also be used
in other places such as football stadiums, public amenities, and other city landmarks, where a
fair number of pictures are available on Google Street View and users would like
to know the surrounding environment. In such cases, our web scraper can be used
directly to search these new places with no change to the scraping script; we can also
make a new website in a short amount of time, as we only have to change the homepage
while keeping the functions that power the main tour page.
However, there are still limitations to the system. For instance, although the function that checks
repeats works in the majority of cases, it is still unable to detect every repeated location with just
latitude and longitude data. This requires human effort to manually go through the scraping
results and delete repeated locations. Also, the quality of the 360-degree images cannot be
ensured. Everybody can upload 360 pictures to the internet, and since Google Street View has
been around for more than 10 years, some pictures were taken many years ago and therefore have
lower image resolution and quality. Lastly, because the website relies on Google services to
power its functions, users who cannot access Google services, such as those in China, will not
be able to view the website without the help of proxy applications.
In the future, we can try to find another way to analyze and delete the repeated images, and also
find ways to analyze the resolution of the images in order to maximize the tour quality.
Improving the look of the website UI can also give users a better experience with the tour.