This document discusses navigation meshes (NavMeshes), which are a technique for pathfinding and movement in 3D virtual environments. A NavMesh is a simplified 3D geometry that represents the walkable space of a virtual world. It is composed of convex polygons connected at edges. This geometry is converted into a graph that serves as the input for pathfinding algorithms. Pathfinding algorithms like A* are run on the graph to determine a path between locations. The resulting path is then used to control an agent's movement through the virtual environment by interpolating its position between nodes representing polygons in the NavMesh.
NavMesh
Pathfinding using Navigation Meshes
Filipe Ghesla Silvestrim
Abstract— To achieve accurate movement behaviors in three-dimensional environments, path planning and locomotion with Navigation Meshes (or NavMeshes) is currently the most common answer, at least among game developers. This paper describes, from theory to practice, what Navigation Meshes are, how to generate them, how to apply pathfinding, and how to enable movement on top of them. Although we will only take the perspective of virtual worlds, the same approach can be applied to hardware (i.e. robots) with the help of modern sensors and processors.
I. INTRODUCTION
A key feature of virtual world simulations is the ability of an agent to move "intelligently" around the environment without any human interference. A virtual agent should therefore be aware of its world and use that knowledge to find smart locomotion routes for its current task (for example, a character should avoid walls, trees and mud if it is not in danger). To achieve intelligent motion as described above, one must apply Pathfinding (a.k.a. Path Planning) as the logic layer that lies between the decision-making and movement logic. Ultimately, the result of a Pathfinding algorithm is a Path, defined by Jeff Huang from Brown University in one of his lectures as "a list of instructions for getting from one location to another".
Independent of the Pathfinding algorithm used (odds are you will hear an earful about A* [1], since it is a proficient and well-proven algorithm), it cannot work directly with the virtual world data; it demands as input a graph data structure of traversable paths (connections or links) between nodes. Therefore, in order to achieve locomotion or path planning we must first carefully describe the world through this kind of data structure. In the end, locomotion involves much more than just interpolating along the Path: "the core pathfinding algorithm is only a small piece of the puzzle, and it's actually not the most important" (Paul Tozour [2]).
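To make the required input concrete, the sketch below runs A* over a small hand-built adjacency list of the kind described above; the node names, costs, and the zero heuristic are invented for this illustration, not taken from the paper.

```python
import heapq

def a_star(graph, start, goal, h):
    """A* over an adjacency list: graph[node] -> [(neighbor, cost), ...].

    h(node) estimates the remaining cost to the goal; it must never
    overestimate for the returned path to be optimal."""
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + h(neighbor), new_g, neighbor, path + [neighbor]))
    return None, float("inf")

# A tiny invented graph: two routes from A to D with different total costs.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("D", 5)],
    "C": [("D", 1)],
}
path, cost = a_star(graph, "A", "D", h=lambda n: 0)  # zero heuristic = Dijkstra
```

With the zero heuristic the search degenerates to Dijkstra's algorithm and still finds the cheaper route A, C, D (cost 5) over A, B, D (cost 6).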
A virtual world is usually designed/described either through "grids" (2D arrays), in the case of 2D worlds, or through geometry data (popularly known as the "polygon soup") for 3D worlds, as seen in Figure 1. Traditionally, this is the only type of data available to us beforehand. In the case of a two-dimensional world we could reuse the two-dimensional array as a graph representing the virtual environment. By contrast, translating the raw geometric data into a graph structure would leave us with a rather complex search space, which would be unfavorable for real-time simulations of complex virtual worlds. Thus, we must find a way to translate the geometric data into a simplified graph structure.
Fig. 1: Left image shows a 2D world representation; right image shows a 3D world geometry representation
Fig. 2: Waypoints visually configured in a 3D enviroment
One really important factor of the simplification task is
finding the balance between a small search space and
information loss, since an oversimplified graph could deliver
poor locomotion due to the lack of encoded information (for
example, an entity must walk slower in a mud area, but if the
graph was so simplified that it missed the mud, the result
would be an unrealistic movement simulation for our agent).
Before the introduction of Navigation Meshes, the most
common type of environment graph representation was
through manually created Waypoints (also known as Path
lattices or Meadow Maps): three-dimensional spheres con-
nected to each other, as seen in Figure 2. Although this is
a good approach due to the control it gives over the shape
and information contained in the graph, it is flawed in that
the optimal path is likely not in the graph (for precision we
would need more waypoints, which would increase the
complexity of the search space), and it is time-consuming
due to its manual nature.
Snook [3], in pursuit of a better approach for the
creation of a near-optimal 3D graph, introduced the Navigation
Mesh: a static geometry that abstractly represents the three-
dimensional world and contains Pathfinding information. In
other words, he proposed an auxiliary geometry (composed
of a set of connected convex polygons - see Figure 3) to
represent the "walkable" space of the 3D world (which does
not necessarily need to match the visible area), which would
then be further translated into a graph on which the Pathfinding
and Movement algorithms are applied.
Fig. 3: In navy blue, the NavMesh geometry
II. NAVIGATION MESHES
With its first appearance as a publication in Game
Programming Gems, the Navigation Mesh [3] (a.k.a. NavMesh)
is a simple 3D geometry representing the "walkable" space
of the virtual world, used for Pathfinding and first seen in
action in the game Counter-Strike: Source, powering the
Pathfinding for the bots (AI NPCs).
The NavMesh is by definition a static geometry composed
of convex polygons, since convexity is the only way to
guarantee that the agent can move in a straight line within a
polygonal node, in which relevant information (such as the
terrain type within that polygon) can be encoded. The way
the geometry was modeled determines how the nodes in the
Search Graph are connected and therefore enables or disables
routes between two polygons. If the vertices of two polygons
share a common edge, the polygons are inferred to be connected
in the graph, and consequently the virtual agent's route can
cross ("walk") over both polygons.
A NavMesh Graph node directly represents a polygon in
the NavMesh geometry. Therefore, reducing the number of
polygons in the NavMesh geometry directly reduces the
complexity of the search space. Consequently, in most real-
case scenarios, NavMesh geometries are composed of convex
polygons with more than three sides.
In regards to the "walkable" term, we should acknowl-
edge that static obstacles (or obstructions) should not be
part of the NavMesh; their absence is crucial for the
creation of a proper search graph. Although not serving a
collision avoidance goal directly, Pathfinding approaches can
benefit from the information encoded in the path nodes to
aid static collision avoidance by simplifying the 3D collision
problem to 2D edge collisions (best described in the
following section).
In theory, the same geometry used to build the ground
of our virtual world could also be used as the Navigation
Mesh; in practice, this mesh is usually too detailed and
encodes too much information, and therefore ends up being
performance-inefficient due to its complex search space
and the huge amount of information that must be decoded
to drive motion. With this in mind, NavMesh geometries are
usually built manually by 3D artists or level designers, or
(semi-)automatically generated - taking as input either
the world geometry or some helper entities. The NavMesh
geometry is not intended to be visible to the end user; we
use it only as a form of data storage for our Pathfinding.
Fig. 4: The translation from polygons to Graph nodes of a
NavMesh
In order to be able to apply Pathfinding algorithms over
our Search Graph, we first must understand how to generate
it. From a high-level perspective, the Search Graph is built by
iterating through each polygon in the mesh, marking it as a
node, and checking whether polygons share an adjacent edge;
if so, their nodes are "connected" in the graph (as seen in
Figure 4). Usually relevant information such as the vertices
and other geometry data is stored in the nodes (either as
raw data or as references into look-up tables), as well as some
user or custom data (such as the terrain type or similar
real-time-relevant data).
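As an illustration of the shared-edge check described above, here is a minimal sketch (my own, not from the paper) in Python, assuming each polygon is given as a tuple of vertex indices in winding order:

```python
from collections import defaultdict

def build_search_graph(polygons):
    """Connect two polygons in the graph when they share an edge.

    An edge is a pair of consecutive vertex indices; it is normalized
    (sorted) so that (a, b) and (b, a) compare equal.
    """
    edge_owner = {}           # normalized edge -> polygon index that owns it
    graph = defaultdict(set)  # polygon index -> adjacent polygon indices
    for node, poly in enumerate(polygons):
        for i in range(len(poly)):
            edge = tuple(sorted((poly[i], poly[(i + 1) % len(poly)])))
            if edge in edge_owner:
                other = edge_owner[edge]
                graph[node].add(other)
                graph[other].add(node)
            else:
                edge_owner[edge] = node
    return graph

# Three quads in a row: 0 and 1 share an edge, 1 and 2 share an edge.
g = build_search_graph([(0, 1, 2, 3), (1, 4, 5, 2), (4, 6, 7, 5)])
```

The resulting adjacency sets are exactly the node connections of Figure 4; A* can then run over `graph` directly.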
After acquiring the 3D graph representation - which will be
used as the Search Graph - a Path (Figure 5) must be retrieved
by applying any 2D Pathfinding algorithm (such as A*).
Finally, in order to utilize the NavMesh in real-time
projects, developers feed either the raw NavMesh geometry
or the Search Graph as a static, pre-processed input
to the Pathfinding system.
III. MOVEMENT
In order to enable our virtual agent to move in the
environment, we must interpolate its position through time
relative to the next available node in the Path list. For
matters of focus, this paper will concentrate solely on
locomotion methods that result from iterating the NavMesh
Search Graph Path. In order to understand how to use the
NavMesh geometry to control object movement (i.e. for first-
person movement control), I recommend reading Snook [3]
and Millington [4].
Fig. 5: The result of A* applied on the given Graph. Green marks
the origin, red the target
Noticing that movement is generated by iterating over
the nodes of the Path list, and knowing that each node in
the list represents a polygon in three-dimensional space,
a valid question arises: regarding the next node,
where, spatially, is our agent heading? To answer
this question we must further understand the polygon's
geometry: it is made of faces, edges, and vertices. Knowing
this, we have three possible answers: the centroid of the
polygon (or of all faces); the polygon's vertices; and the
middle of an edge.
The decision of which spatial reference to use
comes down to the environment and to the agent's "way
of moving". For this paper I assume that we are focusing
on humanoid agents and intend to produce human-like
movements that are aesthetically believable. Given that,
we can straightaway discard the polygon vertices as a
reference, as they would produce non-human-like movement
by "hugging" the boundary of the environment (Figure
6). We could instead use the polygon center, which is
neither aesthetically good, due to zigzag-like movement, nor
optimal, since it does not yield the shortest path, and could
even produce wrong paths by not acknowledging
obstructions (a.k.a. holes in the mesh - see Figure 7).
This leaves the edge midpoints as the
most viable option by exclusion (Figure 8).
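As a small sketch of the edge-midpoint choice (my own illustration, assuming 2D vertex coordinates and polygons given as vertex-index tuples), the raw Movement List is just the midpoint of each shared edge along the polygon path:

```python
def edge_midpoint_path(path_polys, vertices):
    """For each pair of consecutive polygons on the path, emit the
    midpoint of their shared edge as the next movement target."""
    points = []
    for a, b in zip(path_polys, path_polys[1:]):
        shared = [v for v in a if v in b]  # the two vertices of the shared edge
        (x1, y1), (x2, y2) = vertices[shared[0]], vertices[shared[1]]
        points.append(((x1 + x2) / 2, (y1 + y2) / 2))
    return points

# Two unit-height quads side by side, sharing the edge between vertices 1 and 2.
verts = {0: (0, 0), 1: (2, 0), 2: (2, 2), 3: (0, 2), 4: (4, 0), 5: (4, 2)}
route = edge_midpoint_path([(0, 1, 2, 3), (1, 4, 5, 2)], verts)
```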
IV. MOVEMENT SMOOTHING
As seen previously, the edge midpoint is the best reference
for movement. This alternative is attractive because no further
processing of the data (the Path list) is needed to generate
movement. The approach is, nevertheless, neither aesthet-
ically accurate nor "shortest path" optimal, which may be
acceptable for small projects with agents moving in
a robot-like way, but cannot be accepted when targeting
human-like movements. Accordingly, we must go one step
further and use the NavMesh Path as the input to a
Movement Smoothing algorithm, which returns a new list
(the Movement List) that replaces the Path list for movement
purposes.
Fig. 6: Represented by blue arrows, the nodes of the Movement
List when using vertices as the node spatial reference
Fig. 7: Represented by blue arrows, the nodes of the Movement List
when using the polygon center of mass as the node spatial reference
Fig. 8: Represented by blue arrows, the nodes of the Movement
List when using the edge midpoint as the node spatial reference
A. Line of Sight
In order to smooth the Motion Path (the movement projection
through the Movement List), one of the simplest techniques
is the line-of-sight approach [3], in which the line of motion
(the agent's forward vector) always points at the furthest
visible edge midpoint. In non-technical words it can be
described as the "horizon" approach, in which the destination
of the next movement point is the furthest visible point on
the horizon.
With this method we pre-process our Movement List
(calculating it just once, instead of iteratively in real time)
by running a visibility-test algorithm through the NavMesh
Path. The line-of-sight method starts by creating an empty
Movement List whose first movement node is the current
position of the agent (which can also be the first node
in the NavMesh Path, depending on the implementation);
from this point on, for each following node in the path, a
line-of-sight test (where a simple raycast can serve
the purpose) is applied to check for visibility: in case
of a positive result the node is skipped (in order to smooth
the movement path) and the algorithm continues iterating
through the following Path nodes until it finds either a node N
that is not visible - in which case it adds node N − 1 as
the next entry in the Movement List (represented by the red
arrow in Figure 9) - or the final path node.
Fig. 9: The Line of Sight Motion Path in blue arrows
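A sketch of this pre-processing pass (my own, hedged illustration; the `visible` callback stands in for the raycast against the NavMesh, which the caller must supply):

```python
def line_of_sight_smooth(path_points, visible):
    """Collapse a path by always jumping to the furthest visible point.

    From the current anchor, scan forward while the next point is still
    visible; when point N is not visible, point N - 1 becomes the next
    Movement List entry (and the new anchor).
    """
    if not path_points:
        return []
    movement = [path_points[0]]
    anchor = 0
    while anchor < len(path_points) - 1:
        furthest = anchor + 1
        for j in range(anchor + 2, len(path_points)):
            if visible(path_points[anchor], path_points[j]):
                furthest = j
            else:
                break
        movement.append(path_points[furthest])
        anchor = furthest
    return movement

pts = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
open_world = line_of_sight_smooth(pts, lambda a, b: True)   # nothing blocks
walled = line_of_sight_smooth(pts, lambda a, b: False)      # everything blocks
```

With an unobstructed corridor the whole path collapses to its endpoints; with every ray blocked, every node is kept.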
This technique is not limited to movement. It can
also be used in agent-recognition applications, in which
one can very quickly test whether two agents can see each
other by running it over a NavMesh visibility path - the
polygon nodes "connecting" both agents. While the
applicability of this example is very narrow, it serves as an
inspiration for the possibilities opened up by this kind of
technique.
B. Iterative Constraint Rays
In an attempt to avoid the "unnecessary" use of edge mid-
points, which usually results in a longer Motion Path than
the one that would actually be required by a human-like
agent, and in order to stay light on the CPU, O'Neill in
his publication [5] presented the Iterative Constraint Rays
technique - designed to run iteratively in real time.
Based on a "Field of View" approach, O'Neill's technique
uses the agent's forward vector to propel motion; the
vector is clamped to the Field of View (or FOV) formed
by the agent's current position plus both vertices of the next
visible edge (the white dashed arrows in Figure 10);
then, as a final tune, the forward vector is clamped once
more, now against the normalized OD (agent's origin-to-
destination point) vector, represented by the green dashed
arrow in Figure 10. This defines one look-ahead "hop" of
O'Neill's technique.
Although a single hop (iterating over just one node N of
the Path list) always provides a valid forward vector, the
algorithm was designed to run multiple iterations in order to
deliver accurate forward vectors. This technique produces a
shorter path than the Line of Sight approach and is
straightforward to implement.
Fig. 10: The Iterative Constraint Rays Motion Path in blue arrows
Fig. 11: The Reactive Path Following Rays Motion Path in blue
arrows
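A 2D sketch of a single FOV-clamp "hop" (my own simplification of O'Neill's idea; the function names are hypothetical, and a full implementation would handle angle wrap-around and operate in 3D):

```python
import math

def clamp_to_fov(origin, target, left_v, right_v):
    """Clamp the desired heading into the cone spanned by the next
    visible edge's two vertices, returning a unit forward vector."""
    def angle_to(p):
        return math.atan2(p[1] - origin[1], p[0] - origin[0])
    lo = min(angle_to(left_v), angle_to(right_v))
    hi = max(angle_to(left_v), angle_to(right_v))
    a = max(lo, min(hi, angle_to(target)))  # constrain heading to [lo, hi]
    return (math.cos(a), math.sin(a))

# Edge vertices at +/-45 degrees; a target straight up is clamped to 45.
fwd = clamp_to_fov((0, 0), (0, 5), (1, 1), (1, -1))
# A target already inside the cone passes through unchanged.
inside = clamp_to_fov((0, 0), (5, 0), (1, 1), (1, -1))
```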
C. Reactive Path Following
In an attempt to smooth the straight-line, robotic move-
ment resulting from the Line of Sight technique, and to
improve its performance by avoiding motion-path re-
computation, Michael Booth from Valve [6] presented the
Reactive Path Following technique, an iterative (real-time)
movement approach in which the agent moves towards
a "look-ahead" point along the NavMesh Path (just like
O'Neill's approach) and makes use of local obstacle avoid-
ance in order to "react" to possible static (or dynamic) colli-
sions caused by the agent's motion. To visualize the technique
in plain words: imagine the agent as a Lego brick on top of a
skateboard (the collision box), with a string (or rope) of
length L attached (the distance of our look-ahead), by which
we pull it towards the next defined node of the NavMesh.
The low-level aspects of this technique, being real-time
iterative, lie firstly in finding the look-ahead point - at
the start, or each time the agent changes its
NavMesh node - as the next non-obstructed edge-midpoint
node N + 1 in the NavMesh Path (the result of the Line
of Sight technique, shown as the green arrow in Figure 11),
which serves as the forward vector of its movement; and
secondly in the local path avoidance, which generates a
pseudo steering behavior by calculating the reaction vector
of the bounding-box collision (represented by the agent's red
bounding-box half in Figure 11) and adding this new vector
to the forward movement.
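The blend of look-ahead direction and collision reaction can be sketched as follows (my own minimal illustration; the weighting scheme is an assumption, not Booth's actual formulation):

```python
import math

def steer(forward, reaction, reaction_weight=1.0):
    """Add the local-avoidance reaction vector to the look-ahead
    forward vector, then renormalize to get the movement direction."""
    x = forward[0] + reaction_weight * reaction[0]
    y = forward[1] + reaction_weight * reaction[1]
    n = math.hypot(x, y) or 1.0  # guard against a zero-length sum
    return (x / n, y / n)

# Pushed sideways while moving forward: the agent heads diagonally.
d = steer((1.0, 0.0), (0.0, 1.0))
```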
This technique has, in my eyes, one key advantage over
other kinds of path smoothing techniques: it allows a "cheap"
local avoidance approach for dynamic objects such as other
NPCs. It has also been proven in action: all NPC actors in
the game Left 4 Dead employ it.
Fig. 12: The Simple Stupid Funnel Motion Path in blue arrows
D. Simple Stupid Funnel Algorithm
To finish our survey of movement smoothing ap-
proaches, I'd like to introduce one of the simplest, yet most
powerful, solutions: the Simple Stupid Funnel Algorithm.
First introduced by Mononen [7] as an attempt to overcome
the standard edge-midpoint approach to path smoothing (an
approach that, in his own words, makes him sad), this
technique can be described as simply as "pulling a string
tight" from the start to the destination point (see Figure 12).
From a low-level perspective, Mononen uses, in principle,
a similar approach to the one described by O'Neill
[5], but instead of what I called a FOV, he has "funnels".
By generating a Movement List, this technique can be
described (like Line of Sight) as a run-once
technique. In order to generate the list, Mononen's algorithm
can be simplified to the following: an iterative algorithm
that "walks" over all the nodes in the NavMesh Path list and
keeps track of the leftmost (green dashed arrow in Figure
13) and rightmost (red dashed arrow in Figure 13) sides of
the funnel of the current movement node; by sequentially
iterating over the left and right sides of the
following NavMesh Path nodes, the funnel narrows up to
the point where the sides cross each other (second frame of
Figure 13) - at this point the last non-crossed side vertex
gets added to the Movement List. The algorithm then
iterates until it reaches the final node in the NavMesh Path list.
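The description above can be sketched in 2D as follows (adapted from Mononen's published pseudocode; the portal list is assumed to start with (start, start) and end with (end, end)):

```python
def triarea2(a, b, c):
    """Mononen's triangle-area test: positive when c is clockwise of a->b."""
    return (c[0] - a[0]) * (b[1] - a[1]) - (b[0] - a[0]) * (c[1] - a[1])

def string_pull(portals):
    """Pull the path tight through a list of (left, right) portal edges."""
    pts = [portals[0][0]]
    apex = pleft = pright = portals[0][0]
    apex_i = left_i = right_i = 0
    i = 1
    while i < len(portals):
        left, right = portals[i]
        if triarea2(apex, pright, right) <= 0.0:        # tighten right side
            if apex == pright or triarea2(apex, pleft, right) > 0.0:
                pright, right_i = right, i
            else:                                       # right crossed left:
                pts.append(pleft)                       # left corner is a turn
                apex, apex_i = pleft, left_i
                pleft = pright = apex
                left_i = right_i = apex_i
                i = apex_i + 1                          # restart from new apex
                continue
        if triarea2(apex, pleft, left) >= 0.0:          # tighten left side
            if apex == pleft or triarea2(apex, pright, left) < 0.0:
                pleft, left_i = left, i
            else:                                       # left crossed right
                pts.append(pright)
                apex, apex_i = pright, right_i
                pleft = pright = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        i += 1
    if pts[-1] != portals[-1][0]:
        pts.append(portals[-1][0])
    return pts

# One left-hand corner: the string bends at the portal's left vertex.
corner = string_pull([((0, 0), (0, 0)), ((0, 2), (2, 2)), ((-3, 3), (-3, 3))])
```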
V. MESH GENERATION
Up to this point we have covered everything needed
to apply Navigation Meshes to Pathfinding and
Movement, but all of it was based on the assumption that
the Navigation Mesh geometry was already there - created
statically by an artist or level designer. In this section
we will briefly discuss two complementary methods for
automatic generation of NavMeshes, taking as input the
geometry of the virtual world.
Fig. 13: Left picture: First Frame of the Algorithm iteration. Right
picture: Second Frame of the iteration showing the leftmost crossing
the rightmost resulting in a new node in the Movement List
A. Optimal Convex Partition
Inspired by the Hertel-Mehlhorn algorithm (basically defined
as an iterative process of surface triangulation and removal
of non-essential edges) [8], Tozour [2] proposes
a technique composed of five steps: Merging Neighbor
Nodes; 3 → 2 Merging; Culling Trivial Nodes; Handling
Superimposed Geometry; and Re-Merging.
Before starting with the process we must retrieve the
floor geometry by iterating through the entire input geometry
(usually the virtual world/level itself) and checking which faces
have their normal facing up. In order to recognize slopes or
ramps, we can check the angle between the normal and the
vertical against a threshold to determine whether the face is
a walkable surface or not. After this process we have
our still-too-high-poly geometry. In theory we could use it
as our NavMesh, but as seen in previous sections this is not
what we want, due to the search space complexity.
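The floor-extraction step can be sketched as follows (my own illustration; the 45-degree threshold and the z-up convention are assumptions to be tuned per project):

```python
import math

def walkable_faces(faces, max_slope_deg=45.0):
    """Keep triangles whose normal is within max_slope_deg of straight up.

    `faces` is a list of triangles, each given as three (x, y, z) vertices.
    """
    up_min = math.cos(math.radians(max_slope_deg))
    keep = []
    for a, b, c in faces:
        # Face normal via the cross product of two edge vectors.
        u = tuple(b[i] - a[i] for i in range(3))
        v = tuple(c[i] - a[i] for i in range(3))
        n = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
        length = math.sqrt(sum(x * x for x in n)) or 1.0
        if n[2] / length >= up_min:  # z component of the unit normal
            keep.append((a, b, c))
    return keep

flat = ((0, 0, 0), (1, 0, 0), (0, 1, 0))  # horizontal triangle: kept
wall = ((0, 0, 0), (1, 0, 0), (0, 0, 1))  # vertical triangle: discarded
floor_faces = walkable_faces([flat, wall])
```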
The Merging Neighbor Nodes step consists merely of
applying the Hertel-Mehlhorn algorithm, in which pairs
of adjacent polygons that share the same two vertices along
the shared edge are merged into one bigger convex polygon
(see Figure 14). The 3 → 2 Merging step consists of identifying
two adjacent polygons that share exactly one vertex (call
them A and B) and the polygon adjacent to both of them
(call it C); after identifying those, we check whether A and B
share an edge parallel to C with one vertex in common. If
these conditions are met, the merge process creates in A and
B a virtual edge parallel to C, which first re-tessellates the
current geometry and then removes C's reference edge
(see Figure 15). For the Culling Trivial Nodes step there is not
much extra to say: we cull geometries that are
too small in area in order to reduce the search complexity.
Almost there: with the complete NavMesh geometry we
could, theoretically, use it for Pathfinding, but obstacles
would still be included in it. The Handling Superimposed
Geometry step is all about the recursive subdivision of
polygons that intersect an obstacle geometry: after each
subdivision (down to a certain minimum polygon size) we
check whether the new polygon still intersects the obstacle,
and if so, we simply remove the polygon from the mesh.
Since this last step creates a lot of unnecessary polygons,
we must then just Re-Merge by applying the first step again.
Fig. 14: Visual representation of the Hertel-Mehlhorn algorithm
Fig. 15: Visual representation of the 3 → 2 Merging algorithm
Fig. 16: On the left side, a not-so-optimal automatic result using
Tozour's [2] technique. On the right side, after the arrow, the
proposed result when applying Farnstrom's [9] technique
B. Polygon Subdivision Algorithm
After analyzing and using Tozour's [2] method, Fredrik
Farnstrom [9] found situations (Figure 16) in which the
previous method would not fit appropriately, and so he
proposed the Polygon Subdivision Algorithm, which
primarily accounts for cutting the floor geometry against
the edges of obstructions.
In a nutshell, Farnstrom's technique is composed of eight
main steps: (1) Separate the ground mesh (floor) and the
obstruction meshes (see Figure 17); (2) Merge all polygons
within the respective meshes in order to reduce the data; (3)
Merge (or fuse) together overlapping and nearby vertex-
aligned polygons (in order to account for stair steps, for ex-
ample); (4) Subdivide the floor mesh with its own upward
extrusion and merge, in order to account for obstacles (such as
chests, ...); (5) Extrude the obstruction mesh downwards in
order to account for too-low, unreachable places (see Figure
18); (6) Subdivide the ground mesh with the obstruction
mesh, in order to remove obstruction areas, and merge;
(7) Fuse ground mesh polygons; (8) Remove disconnected
polygons.
Fig. 17: A ground polygon being "cut" along the obstruction's
intersection with the floor
Fig. 18: Obstructions being extruded downwards (b) in order to
remove unreachable areas (d)
VI. DISCUSSION
So far in this paper we have seen the main methods and tech-
niques involved in the creation and utilization of NavMeshes.
Pathfinding over the NavMesh nodes gives us global
locomotion with static object avoidance; movement
approaches (with some exceptions) usually do not account for
local motion and dynamic collision, and therefore these must
be addressed via other methods of steering or dynamic
collision handling.
In regards to the graph search, we discussed and
described one Search Graph per virtual world, but this should
not always be the case. In order to reduce the
search complexity and avoid sections of the world that are
rarely accessed (i.e. some locked rooms), we can make
use of multiple interconnected Navigation Mesh graphs, in
which a node in the hierarchy contains "portal"
information referencing the entry node of another graph.
Aside from this, in regards to motion, the radius of
the agent is usually included in the movement algorithm in
order to avoid corner clipping, by retracting (shrinking) the
edges used for motion. Likewise, for dynamic collision, an
approach that could provide the best results would be combining
the Simple Stupid Funnel Algorithm with the steering behavior
described in the Reactive Path Following technique. Another
solution to the same problem would be the creation of a
local, fine-grained 2D grid that follows the agent, on which
an A* algorithm would be applied.
Another important topic that was not covered significantly
in this paper is the possibility of encoding information in the
NavMesh polygons. The information stored in the polygons
can range from the kind of terrain (in order to dictate footstep
sounds or animations to be played - like climb, swim, walk,
...) to gameplay information (i.e. in this region the agent
can heal itself).
For all of this to work, our NavMesh geometry
must be "optimally" created. In this paper we briefly saw
two of the main techniques for automated generation
of NavMesh geometry. Although this is a feature that
almost every game engine offers, not all game
developers use it, because it usually does not give
enough freedom to the level designers (who often want
to either fix corner cases or remove regions that should not
be walked). In order to avoid this, we have three common
workarounds: (1) the level designers create invisible
geometries and place them on top of the world geometry; (2)
the virtual world geometry encodes information that
the automatic generation must take into account - which
usually does not work, because the level designer
would interrupt the 3D designer's workflow all the time to
change NavMesh encodings; and finally (3) custom generation
of NavMesh geometry via visual helpers, which could, for
example, define the boundaries of the virtual world and be
used to tessellate the NavMesh geometry. In the end, this
is the part that differs most from application to application
and cannot be set in stone - just consider that we could have a
game in which all agents walk on walls, and everything
discussed so far about automatic generation would have to
be partially thrown away.
Finally, though usually thought of as a virtual-world-only
approach, NavMeshes could, theoretically, be used in real-
world scenarios in which we have the topological information
of the place in advance (via blueprints, satellite
information, or drones scanning the area), which we could use
to provide path planning for a robot, for example. I could
see this working for a Mars rover, for instance, once a
satellite has scanned the Martian surface.
VII. CONCLUSIONS
Navigation Meshes are one example showing that three-
dimensional path planning does not necessarily need to involve
complex 3D math. We can achieve movement in a 3D environment
by applying familiar 2D algorithms, due to the 2D nature of
the Search Graphs. Not to forget, the NavMesh geometry
is just a means to obtain the Search Graph, which is what
is usually used as the offline input of the simulation. With this
technique we simultaneously reduce the complexity of
the search space of 3D worlds and avoid limiting the freedom
of movement. As seen, the NavMesh geometry should be
generated with the approach that fits the nature of the
application being developed.
REFERENCES
[1] P. E. Hart, N. J. Nilsson, and B. Raphael, “A formal basis for the heuristic
determination of minimum cost paths,” IEEE Transactions on Systems
Science and Cybernetics, vol. 4, no. 2, pp. 100–107, 1968.
[2] S. Rabin, Ed., AI Game Programming Wisdom. Charles River Media,
2002, vol. 1.
[3] M. A. DeLoura, Ed., Game Programming Gems. Charles River Media,
2000, vol. 1.
[4] I. Millington, Artificial Intelligence for Games. Elsevier, 2006.
[5] J. C. O’Neill, “Efficient navigation mesh implementation,” Journal of
Game Development, vol. 939, no. 9, pp. 71–90, March 2004.
[6] Valve Inc. AI systems of L4D. [Online]. Available: https://developer.valvesoftware.com/wiki/AI_Systems_of_L4D
[7] M. Mononen. Simple stupid funnel algorithm. [Online]. Available: http://digestingduck.blogspot.de/2010/03/simple-stupid-funnel-algorithm.html
[8] J. O’Rourke, Computational Geometry in C, 2nd ed., M. A. DeLoura,
Ed. Charles River Media, 2000.
[9] S. Rabin, Ed., AI Game Programming Wisdom 3. Charles River Media,
2006, vol. 3.