This document discusses algorithms for solving the point in polygon problem for arbitrary polygons. It presents two main concepts: the even-odd rule and the winding number rule. It shows that both concepts are closely related and can be based on determining the winding number. The document derives an incremental angle algorithm for computing the winding number and modifies it to accelerate the computation and handle special cases. It compares the resulting winding number algorithm with algorithms found in the literature.
Hormann.2001.TPI.pdf
The Point in Polygon Problem for Arbitrary Polygons

Kai Hormann∗
University of Erlangen

Alexander Agathos†
University of Athens
Abstract
A detailed discussion of the point in polygon problem for arbitrary
polygons is given. Two concepts for solving this problem are known in
literature: the even-odd rule and the winding number, the former leading
to ray-crossing, the latter to angle summation algorithms. First we
show by mathematical means that both concepts are very closely related,
thereby developing a first version of an algorithm for determining the
winding number. Then we examine how to accelerate this algorithm and
how to handle special cases. Furthermore we compare these algorithms
with those found in literature and discuss the results.
Keywords: polygons, point containment, winding number, integer
algorithms, computational geometry.
1 Introduction
A very natural problem in the field of computational geometry is the point in
polygon test: given a point R and an arbitrary closed polygon P represented as
an array of n points P0, P1, . . . , Pn−1, Pn = P0, determine whether R is inside or
outside the polygon P. While the definition of the interior of standard geometric
primitives such as circles and rectangles is clear, the interior of self-intersecting
closed polygons is less obvious. In literature [1, 4, 5, 7, 8, 10, 12, 13], two main
definitions can be found.
The first one is the even-odd or parity rule, in which a line is drawn from
R to some other point S that is guaranteed to lie outside the polygon. If this
line RS crosses the edges ei = PiPi+1 of the polygon an odd number of times,
the point is inside P, otherwise it is outside (see Fig. 1(a)). This rule can
easily be turned into an algorithm that loops over the edges of P, decides for
each edge whether it crosses the line or not, and counts the crossings. Various
implementations of this strategy exist [2, 3, 4, 6, 8, 10, 11], which differ in
how they compute the intersection between the line and an edge and how this
rather costly procedure can be avoided for edges that can be guaranteed not to
cross the line. We discuss these issues in detail in Sec. 3.
∗hormann@informatik.uni-erlangen.de
†agalex@ath.forthnet.gr
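For concreteness, the even-odd rule described above can be written down directly. The following Python sketch is our own illustration (not code from the paper) and uses the common choice of a horizontal ray extending to the right of R:

```python
def even_odd(polygon, r):
    """Even-odd rule: count crossings of a horizontal ray from r.

    polygon is a list of (x, y) vertices; the closing edge back to
    polygon[0] is handled implicitly.  Points exactly on the boundary
    are not treated specially here (see Sec. 2 of the paper).
    """
    rx, ry = r
    inside = False
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        # The edge crosses the horizontal line through r iff exactly
        # one endpoint lies strictly below it (so y0 != y1 here).
        if (y0 < ry) != (y1 < ry):
            # x coordinate of the intersection with that line.
            x_cross = x1 - (y1 - ry) * (x1 - x0) / (y1 - y0)
            if x_cross > rx:          # crossing to the right of r
                inside = not inside
    return inside
```

For example, `even_odd([(0, 0), (4, 0), (4, 4), (0, 4)], (2, 2))` reports the center of the unit-style square as inside.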
Figure 1: The interior of a self-intersecting polygon based on the even-odd rule
(a) and the nonzero winding number (b).
The second one is based on the winding number of R with respect to P, which
is the number of revolutions made around that point while travelling once along
P. By definition, R will be inside the polygon, if the winding number is nonzero,
as shown in Fig. 1(b). We show that the same result as with the even-odd rule
can be obtained by letting the interior consist of those points whose winding
number is odd. Therefore, both definitions of the interior can be based on the
winding number, making this concept the more general one.
In Sec. 2 we explain in detail how the incremental angle algorithm [12]
for determining the winding number can be derived mathematically. Further
analysis of this algorithm leads to a modification that turns it into a ray-crossing
algorithm, revealing that both concepts are the same in principle. The resulting
algorithm is capable of handling any special cases that might occur, e. g. R may
coincide with one of the vertices Pi of P or may lie on one of P’s edges ei.
Several methods for accelerating this basic algorithm are discussed in Sec. 3.
Of course, the problem is always of complexity O(n) for arbitrary polygons; hence
“acceleration” refers to reducing a constant time factor. The complexity can
only be reduced for special polygons, e. g. if the polygon is convex, an O(log n)
algorithm can be found [7, 8, 10]. The performance of the different algorithms is
analyzed in Sec. 4 and a comparison to those found in literature is made. Sec. 5
summarizes the proposed ideas.
2 Winding Numbers
As stated in Sec. 1, the answer to the point in polygon problem can be derived
from the winding number. Starting with the mathematical definition of the
winding number, we simplify the general formula step by step until we obtain
the pseudo-code of a very intelligible algorithm that determines the winding
number of a point with respect to an arbitrary polygon.
The winding number ω(R, C) of a point R with respect to a closed curve
C(t) = (x(t), y(t))ᵀ, t ∈ [a, b], C(a) = C(b), is the number of revolutions made
around R while travelling once along C, provided that R is not visited in doing
so. Whenever there exists t̃ ∈ [a, b] such that C(t̃) = R, the winding number
ω(R, C) is undefined. Otherwise it can be calculated by integrating the differential
of the angle ϕ(t) between the edge RC(t) and the positive horizontal axis
through R (cf. Fig. 2(a)). As C(t) is a closed curve, this always yields ω · 2π
with ω ∈ Z denoting the winding number.
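Numerically, this definition can be evaluated by summing the signed angles subtended at R by the polygon edges and rounding the total number of revolutions. The following Python sketch is our own illustration of this angle-summation idea; it is not the paper's integer algorithm:

```python
import math

def winding_number_angles(polygon, r):
    """Sum the signed angles subtended at r by each edge and round
    the total number of revolutions to the nearest integer."""
    rx, ry = r
    total = 0.0
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i][0] - rx, polygon[i][1] - ry
        x1, y1 = polygon[(i + 1) % n][0] - rx, polygon[(i + 1) % n][1] - ry
        # Signed angle between the edges R->P_i and R->P_{i+1},
        # computed from their cross and dot products via atan2.
        total += math.atan2(x0 * y1 - x1 * y0, x0 * x1 + y0 * y1)
    return round(total / (2.0 * math.pi))
```

Rounding the final sum absorbs the floating-point error of the individual angle terms, as the paper notes below for the arccos-based formula.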
Figure 2: The continuous angle ϕ(t) for curves (a) and the discrete signed angle
ϕi for polygons (b).
Without loss of generality we assume R = (0, 0) so that ϕ(t) = arctan(y(t)/x(t)) and

    ω(R, C) = (1/2π) ∫_a^b dϕ(t) = (1/2π) ∫_a^b (dϕ/dt)(t) dt
            = (1/2π) ∫_a^b (ẏ(t)x(t) − y(t)ẋ(t)) / (x(t)² + y(t)²) dt.    (1)
A closed polygon P represented as an array of n points P0, P1, . . . , Pn−1, Pn = P0
can be seen as a piecewise linear curve t ↦ (xi(t−i), yi(t−i))ᵀ, t ∈ [i, i+1], with
(xi(t), yi(t))ᵀ = tPi+1 + (1 − t)Pi. Using Eq. (1) and Appendix A we obtain

    ω(R, P) = (1/2π) Σ_{i=0}^{n−1} ∫_0^1 (ẏi(t)xi(t) − yi(t)ẋi(t)) / (xi(t)² + yi(t)²) dt
            = (1/2π) Σ_{i=0}^{n−1} arccos( ⟨Pi|Pi+1⟩ / (‖Pi‖ ‖Pi+1‖) ) · sign(Px_i Py_i+1 − Px_i+1 Py_i)    (2)
            = (1/2π) Σ_{i=0}^{n−1} ϕi,    (3)
where ϕi is the signed angle between the edges RPi and RPi+1 (cf. Fig. 2(b)).
Eq. (2) can be used for creating an algorithm for computing the winding
number, but it involves expensive calls to the arccos and sqrt routines. Although
these can be accelerated by using lookup-tables and nearest-neighbour
interpolation, as we can eliminate rounding errors by rounding the final result
to the nearest integer value, this still remains a comparatively slow algorithm.
Further simplification of Eq. (3) can be achieved by considering the rounded
partial sums ŝj = (1/4) · round( Σ_{i=0}^{j} ϕi / (π/2) ) with ω(R, P) = ŝn−1. This is equivalent to
counting only quarter-revolutions and can be realized as follows. Based on an
algorithm that is explained on p. 251 of Rogers’ book [9] for testing whether
a polygon surrounds a rectangular window or is disjoint from it, we classify each
vertex Pi of the polygon P by the number qi of the quadrant in which it is
located with respect to R (cf. Fig. 3), i. e.,

    qi = 0, if arctan(Py_i / Px_i) ∈ [0, π/2),    resp. Px_i > Rx and Py_i ≥ Ry,
    qi = 1, if arctan(Py_i / Px_i) ∈ [π/2, π),    resp. Px_i ≤ Rx and Py_i > Ry,
    qi = 2, if arctan(Py_i / Px_i) ∈ [π, 3π/2),   resp. Px_i < Rx and Py_i ≤ Ry,
    qi = 3, if arctan(Py_i / Px_i) ∈ [3π/2, 2π),  resp. Px_i ≥ Rx and Py_i < Ry.

Figure 3: Classification of vertices Pi by quadrants, e. g. q3 = 0, q4 = 1, and q5 = 2.

Figure 4: Example of a half ccw- or cw-revolution: edge e3 with δ3 = 2 is ccw, e4 with δ4 = −2 is cw.
Now we can define the quarter angle∗ δi = qi+1 − qi, i = 0, . . . , n − 1, for
each of the polygon’s edges ei. If δi = 0, then the corresponding edge is located
wholly in one quadrant and nothing happens. If δi ∈ {1, −3}, the edge
crosses one of the quadrant boundaries in counter-clockwise (ccw) direction
and a quarter ccw-revolution around R is made, while the reverse holds for
δi ∈ {−1, 3}. If δi ∈ {2, −2}, a further check is required to decide whether a
half ccw- or cw-revolution around R occurs by moving along the corresponding
edge ei (cf. Fig. 4). This can be done by checking the orientation of the triangle
(R, Pi, Pi+1), i. e., by finding the sign of the determinant:
    sign( (Px_i − Rx)(Py_i+1 − Ry) − (Px_i+1 − Rx)(Py_i − Ry) ) = +1 ⟺ ccw,   = −1 ⟺ cw.
By further introducing the adjusted quarter angles δ̂i via the following table

    δi  | 0 | 1, −3 | −1, 3 | 2, −2 (ccw) | 2, −2 (cw)
    δ̂i  | 0 |   1   |  −1   |      2      |     −2

we can sum up these δ̂i to count the number of quarter ccw-revolutions around
R and get

    ŝj = (1/4) Σ_{i=0}^{j} δ̂i   ⟹   ω(R, P) = ŝn−1 = (1/4) Σ_{i=0}^{n−1} δ̂i.
∗Because of 0 ≤ qi ≤ 3 we know that −3 ≤ δi ≤ 3.
evaluation of the determinant:

    function det (i)
        return (Px_i − Rx) ∗ (Py_i+1 − Ry) − (Px_i+1 − Rx) ∗ (Py_i − Ry)

quadrant classification:

    for i = 0 to n − 1
        if Px_i > Rx and Py_i ≥ Ry : qi = 0
        if Px_i ≤ Rx and Py_i > Ry : qi = 1
        if Px_i < Rx and Py_i ≤ Ry : qi = 2
        if Px_i ≥ Rx and Py_i < Ry : qi = 3
    qn = q0

determination of the winding number:

    ω = 0
    for i = 0 to n − 1
        switch qi+1 − qi :
            1, −3 : ω = ω + 1
            −1, 3 : ω = ω − 1
            2, −2 : ω = ω + 2 ∗ sign of det (i)
    return ω/4

Algorithm 1: First version of a winding number algorithm.
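As our own transcription (the paper gives only pseudocode), Algo. 1 can be rendered in Python as follows; like the pseudocode, it assumes that R does not lie on the boundary of P:

```python
def winding_number_v1(polygon, r):
    """First winding number algorithm (Algo. 1): classify each vertex
    by its quadrant relative to r, then sum quarter revolutions.
    Assumes r is not on the boundary of the polygon."""
    rx, ry = r
    n = len(polygon)

    def det(i):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        return (x0 - rx) * (y1 - ry) - (x1 - rx) * (y0 - ry)

    def quadrant(p):
        x, y = p
        if x > rx and y >= ry:
            return 0
        if x <= rx and y > ry:
            return 1
        if x < rx and y <= ry:
            return 2
        return 3                      # x >= rx and y < ry

    q = [quadrant(p) for p in polygon]
    q.append(q[0])                    # q_n = q_0

    omega4 = 0                        # four times the winding number
    for i in range(n):
        delta = q[i + 1] - q[i]
        if delta in (1, -3):
            omega4 += 1
        elif delta in (-1, 3):
            omega4 -= 1
        elif delta in (2, -2):
            omega4 += 2 if det(i) > 0 else -2
    return omega4 // 4                # always an exact multiple of 4
```

The quarter-revolution counter omega4 ends at an exact multiple of 4 because the vertex classification returns to its starting quadrant after one loop around P.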
This leads to the first version of a winding number algorithm (Algo. 1) which
resembles the incremental angle algorithm in [12]. It can further be improved
by exploiting the following observation:
    Σ_{i=0}^{n−1} δi = Σ_{i=0}^{n−1} (qi+1 − qi) = qn − q0 = 0   ⟹   ω(R, P) = (1/4) Σ_{i=0}^{n−1} (δ̂i − δi),

i. e., we just need to sum up the differences δ̂i − δi, which are nonzero only for
δi ∈ {−3, −2, 2, 3}†. Thus, by defining

    δ̄i =  1, if δi = −3, or δi = −2 and ccw,
    δ̄i = −1, if δi = 3, or δi = 2 and cw,
    δ̄i =  0, else,

we get

    ω(R, P) = Σ_{i=0}^{n−1} δ̄i,

which results in a slight modification of the first algorithm (Algo. 2) and improves
the performance by approximately 5 %.
However, besides this small acceleration it is far more important to notice
that the algorithm has now turned into a ray-crossing method. In fact, by
disregarding all edges with δi ∈ {−1, 1}, the remaining cases relate to edges that
cross the horizontal ray‡ ϱ = {R + λ(1, 0)ᵀ, λ ≥ 0}. The difference to the even-odd

†And we have δ̂i − δi = ±4 in these cases.
‡This refers to the special choice of S = (+∞, Ry)ᵀ in the definition of the even-odd rule.
    . . .
    ω = 0
    for i = 0 to n − 1
        switch qi+1 − qi :
            −3 : ω = ω + 1
             3 : ω = ω − 1
            −2 : if det (i) > 0 : ω = ω + 1
             2 : if det (i) < 0 : ω = ω − 1
    return ω

Algorithm 2: Modification of Algo. 1.
rule where only the number of crossings is counted is that edges starting below
this ray and ending above it are counted +1 and the others −1 (cf. Fig. 5).
Nevertheless, the winding number ω still shifts from even to odd and back
with every crossing, so that testing the parity of ω exactly gives the even-odd
definition.
Therefore, we have presented an algorithm that is capable of solving the point
in polygon problem for both definitions of the interior of an arbitrary polygon:
the one based on the even-odd rule and the other based on the nonzero winding
number. In contrast to O’Rourke’s statement that the determination of
the winding number depends on “floating-point computations, and trigonometric
computations in particular” [8], this algorithm gets by with integer arithmetic,
except for the det function, which needs floating-point operations if the
coordinates are non-integer. Divisions (integer or floating-point) are avoided
entirely.
Another advantage of this algorithm is the handling of degenerate cases,
which can cause trouble in other algorithms, as e. g. Foley et al. point out that
“the ray must hit no vertices of the polyline” [1]. However, the quadrant
classification of the vertices Pi naturally avoids these degeneracies. In Fig. 6,
the regular polygon segment P1, P2 as well as the degenerate segment sequence
P6, P7, P8, P9 should count +1, while the sequence P4, P5, P6 should count −1
and P2, P3, P4 should not be regarded as a crossing at all, thus resulting in
ω(R, P) = 1.
Figure 5: All edges (arrows indicating direction from Pi to Pi+1) crossing from
below the ray to above are counted +1 (a), the others −1 (b).
Figure 6: Degenerate intersections of the ray ϱ and P.
The classification scheme guarantees that no vertex can ever coincide with
the ray ϱ, because either Py_i ≥ Ry, in which case Pi is classified as lying above
the ray, or Py_i < Ry, which holds for all vertices lying below ϱ. Therefore, the
edges e2, e3, e4 and e7 are ignored, as all the vertices adjacent to these edges
are classified as 0-quadrant vertices. On the other hand, edges e1 and e6 will
be recognized as positive crossings (δ̄1 = δ̄6 = 1) and e5 as a negative one
(δ̄5 = −1). All other edges, including e8, do not affect the determination of
ω(R, P).
Another important feature of the algorithm is that it can easily be modified
to recognize the special case of R lying on the boundary of P, which may lead
to ambiguities in some other algorithms. We distinguish two different cases:
firstly, R may coincide with one of the vertices Pi of P, which can be detected
by inserting the line

    if Px_i = Rx and Py_i = Ry : exit vertex code

into the quadrant classification loop. Secondly, R may lie on one of P’s edges
ei. In this case, the angle between RPi and RPi+1 is always ±π, so that the
classification scheme assigns two diagonally opposite quadrants (0 and 2, or 1
and 3) to the vertices Pi and Pi+1, hence δi = qi+1 − qi = ±2. This always
invokes the det function, which returns 0 in this case. Thus, by replacing this
function with
    function det (i)
        d = (Px_i − Rx) ∗ (Py_i+1 − Ry) − (Px_i+1 − Rx) ∗ (Py_i − Ry)
        if d = 0
            exit edge code
        else
            return d

the algorithm is able to detect this case, too. In the remainder we will refer to
this modification as the boundary version.
3 Efficient Implementation
For many applications the algorithms of the previous section are sufficient. They
are robust, correct, and easy to understand, which always helps to reduce the
probability of an implementation bug. But in other applications this routine
might be called so often that it turns out to be a bottleneck. We now discuss
    function classify (i)
        if Py_i > Ry
            return (Px_i ≤ Rx)
        else
            if Py_i < Ry
                return 2 + (Px_i ≥ Rx)
            else
                if Px_i > Rx
                    return 0
                else
                    if Px_i < Rx
                        return 2
                    else
                        exit vertex code

Algorithm 3: Efficient quadrant classification.
    ω = 0
    for i = 0 to n − 1
        (horizontal line crossed?)
        if (Py_i < Ry and Py_i+1 ≥ Ry) or (Py_i ≥ Ry and Py_i+1 < Ry)
            (crossing to the right?)
            if (det (i) > 0 and Py_i+1 > Py_i) or (det (i) < 0 and Py_i+1 < Py_i)
                (modify winding number)
                if Py_i+1 > Py_i
                    ω = ω + 1
                else
                    ω = ω − 1
    return ω

Algorithm 4: Computing the winding number without quadrant classifications.
how the basic algorithms can be accelerated, ending up with two very efficient
versions: the efficient standard algorithm, which is very short but does not care
about the special case of R lying on the boundary of P, and the efficient boundary
algorithm, which needs a little more code but handles that special case.
Looking at Algo. 1 and 2, there are three parts that can be improved: the
det function, the quadrant classification and the determination of the winding
number. The det function can be declared as inline which saves a few clock
cycles for the function call but there is no way to accelerate the actual calculation
of this determinant. Likewise one can try to break up the switch structure in
the third part into a series of sophisticated if/else statements but this does
not really accelerate the algorithm considerably.
However, the quadrant classification can be improved a lot. In the present
version, the average number of comparisons that have to be evaluated for each
vertex is 6, assuming the compiler generates short circuit evaluation§
. By simply
adding an else statement to the end of each line this number can be reduced to
4. A more sophisticated decision tree which only needs slightly more than 2.5
comparisons per vertex and is also able to detect the case of vertex coincidence
is shown in Algo. 3. Note that we have followed the C convention that logical
expressions are equal to 1 if they are true and 0 otherwise in order to reduce the
length of the code. The use of this classification variant accelerates the basic
algorithms of the previous section by more than 30 % (cf. Fig. 11).
Unfortunately this is the maximum speed-up we can get out of our basic
idea and we need to restructure the algorithm for further improvement. First
of all we can combine the two loops because both of them range over the same
interval i = 0 . . . n − 1. Then we can eliminate the array q as we can process
the quadrant numbers on the fly and do not need an explicit storage of these
values. All we need for each single pass of the loop are the quadrant numbers
of the vertex where the edge ei that is currently processed begins and the one
where it ends, namely qb and qe. This leads to the following simplification
§I. e., the second operand of an and operator is only evaluated if the first one is true.
    qb = classify (0)
    for i = 0 to n − 1
        qe = classify (i+1)
        switch qe − qb :
            . . .
        qb = qe

and reduces memory usage as well as execution time.
Further optimization can be achieved by omitting the quadrant numbers
altogether and rather handling the cases in which the winding number changes
directly. All edges ei that relate to these cases have in common that one of
their endpoints lies strictly below the horizontal line through R and the other
one above or on it (cf. Fig. 5). After using this property as an initial test to
distinguish edges that might contribute to a change in the winding number from
those that certainly do not, the edge constellations shown in Fig. 7 remain. To
further reject those edges that do not modify the winding number, the following
observation can be used. Whenever the determinant does not have the same
sign as the difference Py_i+1 − Py_i, the intersection of the edge with the horizontal
line is on the left side of R and the winding number remains unchanged. For the
residual edges, the edge direction decides the sign of the modification: if the edge
crosses the horizontal line from below (Py_i+1 > Py_i), then the winding number
is increased, otherwise it is decreased. These considerations are summarized in
Algo. 4, which is 20 % faster than Algo. 2 with the optimal classification scheme
(Algo. 3). This code can be abridged a lot by using the following macros, which
may seem a little cryptic at first sight:
    crossing : (Py_i < Ry) ≠ (Py_i+1 < Ry),
    right crossing : (det (i) > 0) = (Py_i+1 > Py_i),
    modify ω : ω = ω + 2 ∗ (Py_i+1 > Py_i) − 1.
They can be used to rewrite Algo. 4 with the following lines

    ω = 0
    for i = 0 to n − 1
        if crossing
            if right crossing
                modify ω
    return ω

Algorithm 5: Algo. 4 with macros.

and accelerate it by more than 30 %.
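In Python, this macro version can be transcribed as follows (our own sketch; like Algo. 5 it does not treat points on the boundary specially):

```python
def winding_number_macros(polygon, r):
    """Algo. 5 transcription: winding number via the crossing and
    right-crossing tests, without quadrant classification."""
    rx, ry = r
    n = len(polygon)
    omega = 0
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        if (y0 < ry) != (y1 < ry):                        # crossing
            det = (x0 - rx) * (y1 - ry) - (x1 - rx) * (y0 - ry)
            if (det > 0) == (y1 > y0):                    # right crossing
                omega += 1 if y1 > y0 else -1             # modify omega
    return omega
```

Upward crossings to the right of R add +1, downward ones −1, so clockwise and counter-clockwise loops cancel exactly as the winding number requires.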
The last improvement on the winding number algorithm can be made by
avoiding the rather costly procedure of computing the determinant whenever
possible. In Algo. 5 the determinant computation is invoked for all edges that
pass the crossing test in order to find out whether they intersect the horizontal
line to the left or to the right of R. But for some of these edges this decision
can be made in a simpler way. Referring to Fig. 7, edges like the leftmost one
(Px_i < Rx and Px_i+1 ≤ Rx) never change the winding number, whereas those
similar to the rightmost one (Px_i ≥ Rx and Px_i+1 > Rx) always do. Both cases
can be detected by comparisons, and only the edges in the middle require the
evaluation of the det function to decide whether they affect the winding number
or not. This observation has been realized in the efficient standard algorithm
(Algo. 6). Assuming uniformly distributed polygon vertices, the probability
of occurrence of the different edge types is 25 % for the leftmost case, 25 % for
the rightmost case, and 50 % for the edges shown in the middle. Therefore Algo. 6
evaluates the det function only every second time on average. This improvement
is traded in for two extra comparisons and decreases the computational costs
by approximately 5 %.

Figure 7: Edges that fulfill the crossing condition.

    ω = 0
    for i = 0 to n − 1
        if crossing
            if Px_i ≥ Rx
                if Px_i+1 > Rx
                    modify ω
                else
                    if right crossing
                        modify ω
            else
                if Px_i+1 > Rx
                    if right crossing
                        modify ω
    return ω

Algorithm 6: Efficient standard algorithm.
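The same transcription style applied to the efficient standard algorithm gives the following Python sketch (ours, not the authors' C code); the two extra x-comparisons skip the determinant for the unambiguous edge constellations:

```python
def winding_number_efficient(polygon, r):
    """Algo. 6 transcription: like Algo. 5, but the determinant is
    only evaluated for edges that straddle r in x as well as in y."""
    rx, ry = r
    n = len(polygon)
    omega = 0
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        if (y0 < ry) != (y1 < ry):                        # crossing
            if x0 >= rx:
                if x1 > rx:
                    # edge entirely to the right: always a crossing
                    omega += 1 if y1 > y0 else -1
                else:
                    det = (x0 - rx) * (y1 - ry) - (x1 - rx) * (y0 - ry)
                    if (det > 0) == (y1 > y0):            # right crossing
                        omega += 1 if y1 > y0 else -1
            elif x1 > rx:
                det = (x0 - rx) * (y1 - ry) - (x1 - rx) * (y0 - ry)
                if (det > 0) == (y1 > y0):                # right crossing
                    omega += 1 if y1 > y0 else -1
            # else: edge entirely to the left, never a crossing
    return omega
```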
Now we will show that only minor modifications of the efficient standard
algorithm are necessary in order to handle special cases. First of all, most cases
of edge coincidences can be detected by using the modified det function in
the right crossing condition. Only the case of R lying on a horizontal edge
(leftmost case in Fig. 8) cannot be recognized this way, because this constellation
does not pass the crossing condition and can therefore never reach the call
of the det function. The other special case, R being identical to one of the
vertices Pi, can be detected by checking the equality of both coordinates as
in the boundary version of Algo. 1. The resulting code is shown in Algo. 7.
Note that the vertex coincidence test comes before the investigation of possible ray
intersections, because an edge intersection could be wrongly detected otherwise.

Figure 8: Different cases of edge coincidences.

    if Py_0 = Ry and Px_0 = Rx
        exit vertex code
    ω = 0
    for i = 0 to n − 1
        if Py_i+1 = Ry
            if Px_i+1 = Rx
                exit vertex code
            else
                if Py_i = Ry and (Px_i+1 > Rx) = (Px_i < Rx)
                    exit edge code
        if crossing
            if Px_i ≥ Rx
                if Px_i+1 > Rx
                    modify ω
                else
                    if right crossing
                        modify ω
            else
                if Px_i+1 > Rx
                    if right crossing
                        modify ω
    return ω

Algorithm 7: Efficient boundary algorithm.
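A Python sketch of the boundary version follows the same pattern; it is our own transcription, and the return values "vertex" and "edge" stand in for the paper's exit codes:

```python
def winding_number_boundary(polygon, r):
    """Algo. 7 transcription: winding number with boundary detection.
    Returns the winding number, or the string "vertex" / "edge" when
    r coincides with a vertex or lies on an edge of the polygon."""
    rx, ry = r
    n = len(polygon)
    if polygon[0] == (rx, ry):
        return "vertex"
    omega = 0
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        if y1 == ry:
            if x1 == rx:
                return "vertex"
            # r lying on a horizontal edge through its own y coordinate
            if y0 == ry and (x1 > rx) == (x0 < rx):
                return "edge"
        if (y0 < ry) != (y1 < ry):                        # crossing
            if x0 >= rx:
                if x1 > rx:
                    omega += 1 if y1 > y0 else -1
                else:
                    det = (x0 - rx) * (y1 - ry) - (x1 - rx) * (y0 - ry)
                    if det == 0:                          # r on edge i
                        return "edge"
                    if (det > 0) == (y1 > y0):
                        omega += 1 if y1 > y0 else -1
            elif x1 > rx:
                det = (x0 - rx) * (y1 - ry) - (x1 - rx) * (y0 - ry)
                if det == 0:                              # r on edge i
                    return "edge"
                if (det > 0) == (y1 > y0):
                    omega += 1 if y1 > y0 else -1
    return omega
```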
4 Evaluation
The timings reported in this section refer to an implementation in C on a
195 MHz SGI R10000 with 128 MB of memory, but similar results were obtained
using Pascal on a 233 MHz Pentium PC with 64 MB of memory. We
generated 1000 polygons for different values of n and determined the winding
numbers of 1000 reference points for each of these polygons, thus calling the
winding number algorithm one million times. The vertices of the polygons as
well as the reference points were chosen randomly within the integer square
[−100, 100] × [−100, 100].
Fig. 9 shows that the runtime of the standard algorithms that do not take
the special cases into account grows linearly with the number of vertices, thus
confirming the O(n) complexity of the problem. In contrast, the boundary
algorithms behave differently (see Fig. 10). As the number of vertices grows, the
probability that the reference point lies on the boundary of the polygon increases.
At the same time, the chances that the boundary algorithms exit early with the
Figure 9: Execution times of the algorithms that do not handle the special cases
(log-log plot of time in sec. over the number of vertices, for Algo. 1, Algo. 2,
Algo. 4, Algo. 5, and Algo. 6).

Figure 10: Execution times of the algorithms that handle the special cases
(log-log plot of time in sec. over the number of vertices, for Algo. 2 with optimal
classification, the boundary version with optimal classification, Algo. 6, and
Algo. 7).
detection of a vertex or an edge coincidence rise, and therefore the algorithms
often do not need to run through the whole loop over the polygon’s edges. Of
course, this effect is much less noticeable if the reference points are chosen from
a larger domain than the polygon vertices or if the coordinates are floating-point
values. Fig. 11 summarizes the timing results of all algorithms for the special
choice of n = 10.
    first version (Algo. 1)         6.106 sec.
    with optimal classification     4.140 sec.
    modified version (Algo. 2)      5.705 sec.
    with optimal classification     3.806 sec.
    boundary version                6.128 sec.
    with optimal classification     3.951 sec.
    without quadrants (Algo. 4)     3.030 sec.
    with macros (Algo. 5)           1.990 sec.
    efficient standard (Algo. 6)    1.896 sec.
    efficient boundary (Algo. 7)    2.362 sec.

Figure 11: Execution times of all algorithms for n = 10.
All algorithms presented in this paper can easily be modified to give the
result of the even-odd rule instead of the winding number by replacing every
statement that modifies ω, especially the macro modify ω, with ω = 1 − ω.
This simplification saves about 5 % of the computation time.
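For instance, applying this replacement to the macro version (Algo. 5) gives an even-odd test; the sketch below is our own:

```python
def even_odd_macros(polygon, r):
    """Even-odd variant of Algo. 5: toggle the parity bit instead of
    adding signed crossings."""
    rx, ry = r
    n = len(polygon)
    omega = 0
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        if (y0 < ry) != (y1 < ry):                        # crossing
            det = (x0 - rx) * (y1 - ry) - (x1 - rx) * (y0 - ry)
            if (det > 0) == (y1 > y0):                    # right crossing
                omega = 1 - omega                         # parity toggle
    return omega
```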
Furthermore, we would like to mention that the algorithms can be sped
up considerably by comparing the reference point with the polygon’s bounding
box first, as is done e. g. in the point in polygon algorithm of the C++ library
LEDA [6]. The additional costs of the bounding box determination and the test
itself pay off after just a few (< 10) tests, the precise number depending on the
number of vertices as well as the size of the reference point domain compared
to the size of the polygon.
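The bounding box pre-test itself can be as simple as the following helper (our own sketch; LEDA's actual implementation will differ). For many queries against the same polygon, the box should of course be computed once and cached:

```python
def bbox_pretest(polygon, r):
    """Return False if r lies outside the polygon's bounding box, in
    which case no winding number computation is necessary (omega = 0)."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    return min(xs) <= r[0] <= max(xs) and min(ys) <= r[1] <= max(ys)
```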
We conclude this section by comparing our algorithms to those found in literature.
The most thorough comparison of different point in polygon strategies
was probably made by Haines in [4], with the result that the ray-crossing strategy
performs best “if no preprocessing nor extra storage is available”. Taking a
close look at his ray-crossing algorithm, it is very similar to Algo. 6, except that
he uses a different test for determining whether an edge crosses the ray to the
right. He directly computes the x coordinate of the intersection and compares
it to Rx:

    right crossing′ : Px_i+1 − (Py_i+1 − Ry) ∗ (Px_i+1 − Px_i) / (Py_i+1 − Py_i) > Rx.
Note that the case of Py_i+1 = Py_i, which would cause a division by zero, never
passes the prior crossing test. We found this version to be approximately 8 %
slower than the right crossing condition in our testing environment, which
is probably due to the division operation. This observation corresponds with
Haines’ comments on a modified version of his algorithm [3] where he uses the
right crossing condition. At the same time he omits the if-statements that
filter unnecessary evaluations of this condition, making that modified version
identical to Algo. 5.

The right crossing′ condition has also been used in an implementation
by Franklin [2], which is otherwise identical to Algo. 5, as are the algorithm in
the LEDA library [6] and the implementation by Stein [11], except that they
swap Pi and Pi+1 if necessary so that they can always assume Py_i+1 > Py_i, which
simplifies the right crossing condition to (det (i) > 0) but is about 20 %
slower in total. Finally, the code given by O’Rourke [8] resembles Algo. 4 with a
right crossing′ condition, and he gives further optimization ideas as exercises
which will eventually lead to Algo. 5.
5 Conclusion
We have presented a detailed discussion of the point in polygon problem for
arbitrary polygons. This problem is well known and has been discussed in
many books and papers before. Most of the authors distinguish between two
concepts for solving this problem: the even-odd or parity rule and the nonzero
winding number. We have shown by mathematical means that both concepts
are the same in principle and that the concept of winding numbers encompasses
the even-odd idea.
Furthermore we have developed an algorithm for the determination of the
winding number and have improved it step by step up to a very efficient
implementation. We have compared our algorithms to those found in literature and
can summarize that in our testing environment Algo. 6 performed best although
we admit that Algo. 5 and the implementations in [2, 3, 4] were so close that
they might be faster for different machine architectures and compilers. However,
a definite advantage of our approach is that it can easily be extended to handle
the special case of R lying on the boundary of P (Algo. 7), an issue that was
otherwise taken care of only in [6] and [8], leading to much slower algorithms.
References

[1] J. D. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes. Computer Graphics: Principles and Practice. Addison-Wesley, 2nd edition, 1990.
[2] R. Franklin. pnpoly. http://www.ecse.rpi.edu/Homepages/wrf/geom/pnpoly.html.
[3] E. Haines. CrossingsMultiplyTest. http://www.acm.org/tog/GraphicsGems/gemsiv/ptpoly_haines/ptinpoly.c.
[4] E. Haines. Point in polygon strategies. In P. Heckbert, editor, Graphics Gems IV, pages 24–46. Academic Press, Boston, MA, 1994.
[5] S. Harrington. Computer Graphics: A Programming Approach. McGraw-Hill, 1983.
[6] K. Mehlhorn and S. Näher. LEDA: A Platform for Combinatorial and Geometric Computing. Cambridge University Press, 1999.
[7] J. Nievergelt and K. Hinrichs. Algorithms and Data Structures: With Applications to Graphics and Geometry. Prentice-Hall, 1993.
[8] J. O’Rourke. Computational Geometry in C. Cambridge University Press, 2nd edition, 1998.
[9] D. F. Rogers. Procedural Elements for Computer Graphics. McGraw-Hill, 1985.
[10] R. Sedgewick. Algorithms. Addison-Wesley, 2nd edition, 1988.
[11] B. Stein. A point about polygons. Linux Journal, 35, March 1997.
[12] K. Weiler. An incremental angle point in polygon test. In P. Heckbert, editor, Graphics Gems IV, pages 16–23. Academic Press, Boston, MA, 1994.
[13] M. Woo, J. Neider, and T. Davis. OpenGL Programming Guide. Addison-Wesley, 2nd edition, 1997.
Appendix A

Let R = (0, 0)ᵀ, P = (Px, Py)ᵀ, and Q = (Qx, Qy)ᵀ be the vertices of a planar
triangle and α, β, γ the angles of that triangle at R, P, and Q, resp. Then

    cos α = ⟨P|Q⟩ / (‖P‖ ‖Q‖),    cot β = ⟨P − Q|P⟩ / |D|,    cot γ = ⟨Q − P|Q⟩ / |D|

with D = Px Qy − Qx Py, and for the linear curve (x(t), y(t))ᵀ = tQ + (1 − t)P,
t ∈ [0, 1], the following equations hold:

    ∫_0^1 (ẏ(t)x(t) − y(t)ẋ(t)) / (x(t)² + y(t)²) dt
        = ∫_0^1 D / ( t²⟨Q − P|Q − P⟩ + 2t⟨Q − P|P⟩ + ⟨P|P⟩ ) dt
        = arctan( ⟨Q − P|Q⟩ / D ) + arctan( ⟨P − Q|P⟩ / D )
        = sign(D) (arctan cot γ + arctan cot β)
        = sign(D) (π − γ − β)
        = sign(D) α.