Quang-Huy Tran
Network Science Lab
Dept. of Artificial Intelligence
The Catholic University of Korea
E-mail: huytran1126@gmail.com
2024-04-29
Spatio-Temporal Graph Neural Point Process for Traffic Congestion Event Prediction
Guangyin Jin et al.
AAAI 2023: The 37th AAAI Conference on Artificial Intelligence
OUTLINE
• MOTIVATION
• INTRODUCTION
• PROBLEM FORMULATION
• METHODOLOGY
• EXPERIMENT & RESULT
• CONCLUSION
MOTIVATION
• Traffic congestion is one of the most serious problems in urban management.
• Traffic congestion is a continuous process from generation to dissipation.
o Individual congestion event: occurrence time and duration.
o Predicting both is meaningful for improving traffic management and scheduling:
 when the next congestion event occurs.
 how long it will last.
(Figure: traffic congestion overview)
• Previous approaches have disadvantages:
o Conventional methods only model dense variables such as road speed; sparse variables such as congestion events are left unmodeled.
o They only support prediction within a fixed, short future time window, which is unsuitable for congestion events that play out over longer horizons.
MOTIVATION
• Neural Point Process: an appropriate framework for sparse event prediction in continuous time.
o Probabilistic models of variable-length point sequences observed on the real half-line, here interpreted as the arrival times of events.
• Challenges:
o 1) How to effectively capture the spatio-temporal dependencies in road networks?
o 2) How to effectively model the continuous and instantaneous temporal dynamics simultaneously for each road?
INTRODUCTION
• Propose a novel model named Spatio-Temporal Graph Neural Point Process (STGNPP) for traffic congestion event prediction.
o A Transformer and a Graph Convolution Network (GCN) jointly capture the spatio-temporal dependencies from traffic state data.
o Contextual link representations are extracted and combined with congestion event information to model the history of the point process.
• To encode the hidden evolution patterns of each road, present a novel continuous Gated Recurrent Unit (GRU) layer with a neural flow architecture.
• First work to propose a spatio-temporal graph neural point process.
METHODOLOGY
Task definition
• A road network with $N$ links $V$ ($|V| = N$) as a graph $G = (V, E, A)$.
• Traffic states $X_n$ (e.g., link speed) on each link $V_n$ are dense features in snapshots of a certain time granularity.
• Given a fixed-length historical time window $T$ for each sample:
o predict the occurrence time and duration of the next congestion event.
• Sequential congestion events $S_n = \{s_{n,i}\}$, $i = 1, 2, \ldots, |S_n|$:
o Link $V_n$ has $s_{n,i} = (t_{n,i}, d_{n,i})$.
o $t_{n,i}$: occurrence time.
o $d_{n,i}$: duration.
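As a reading aid, a minimal sketch of how one training sample could be laid out under this task definition; all class and field names here are hypothetical, not the authors' code:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CongestionEvent:
    t: float  # t_{n,i}: occurrence time (e.g., minutes from window start)
    d: float  # d_{n,i}: duration (minutes)

@dataclass
class LinkSample:
    link_id: int                 # index n of link V_n
    traffic_states: List[float]  # dense X_n, e.g., speed per snapshot
    events: List[CongestionEvent] = field(default_factory=list)  # sparse S_n

# Prediction target for one sample: the occurrence time and duration
# (t, d) of the *next* congestion event on the link.
```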
METHODOLOGY
Point Process Distribution
• A stochastic process that simulates sequential events in a given observation time interval $[0, T]$.
• The conditional intensity of events at time point $t$ depends on the historical events $H_t$ up to $t$:
$$\lambda^*(t) \equiv \lambda(t \mid H_t) = \lim_{\Delta t \to 0} \frac{\mathbb{P}(\text{event in } [t, t + \Delta t) \mid H_t)}{\Delta t}$$
• Probability density function of observing an event sequence $\{t_i\}_{i=1}^{n}$, with inter-event times $\tau_i = t_i - t_{i-1}$ ($t_0 = 0$):
$$p(\{t_i\}_{i=1}^{n}) = \prod_{i=1}^{n} \lambda^*(t_i)\,\exp\!\left(-\int_0^T \lambda^*(s)\,ds\right)$$
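To make the density concrete, a small sketch (ours, not the paper's code) that evaluates this log-likelihood for an arbitrary intensity function, approximating the integral term with the trapezoid rule:

```python
import torch

def tpp_log_likelihood(intensity, event_times, T, n_grid=256):
    """log p({t_i}) = sum_i log lambda*(t_i) - integral_0^T lambda*(s) ds.
    `intensity` is any callable mapping a tensor of times to lambda*(t) > 0."""
    log_term = torch.log(intensity(event_times)).sum()
    grid = torch.linspace(0.0, T, n_grid)
    integral = torch.trapezoid(intensity(grid), grid)  # numerical compensator
    return log_term - integral

# Sanity check with a homogeneous Poisson process, lambda*(t) = 2:
events = torch.tensor([0.5, 1.3, 2.9])
print(tpp_log_likelihood(lambda t: torch.full_like(t, 2.0), events, T=4.0))
# = 3*log(2) - 2*4 ≈ -5.92
```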
METHODOLOGY
Overall Architecture
METHODOLOGY
Spatio-Temporal Graph Learning Module
• Link-wise Transformer layer.
• Graph convolution layer.
• Spatio-temporal inquirer.
• First, a fully connected layer maps the historical traffic states into a high-dimensional representation.
METHODOLOGY
Link-Wise Transformer Layer
• Self-attention network, with a trigonometric-function-based (sinusoidal) position encoding,
where $Q$, $K$, and $V$ are the query, key, and value matrices obtained by three linear transformations $W_Q, W_K, W_V \in \mathbb{R}^{D \times D}$, and $D$ is the hidden dimension.
• The result is passed into a two-layer position-wise feed-forward network.
• $M_D$: mask operation that sets the upper triangle of the attention matrix to 0, so each time step attends only to current and earlier steps.
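A compact, self-contained sketch of the two ingredients named above, sinusoidal position encoding plus causally masked single-head self-attention. This is the standard formulation, not the authors' exact layer; masking with -inf before the softmax makes the upper-triangle attention weights 0, matching the role of $M_D$:

```python
import math
import torch
import torch.nn.functional as F

def sinusoidal_pe(seq_len: int, d_model: int) -> torch.Tensor:
    """Trigonometric position encoding from the original Transformer."""
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                    * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

def causal_self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over one link's time series X: [T, D]."""
    T, D = X.shape
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = (Q @ K.T) / math.sqrt(D)
    mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))  # upper triangle -> weight 0
    return F.softmax(scores, dim=-1) @ V

T, D = 12, 16
X = torch.randn(T, D) + sinusoidal_pe(T, D)
H = causal_self_attention(X, *(torch.randn(D, D) for _ in range(3)))
```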
METHODOLOGY
Graph Convolution Layer - Spatio-temporal Inquirer
• Simple graph convolution operation with mix-hop aggregation,
where $A$ is the normalized predefined adjacency matrix, $\alpha_1, \alpha_2 \in \mathbb{R}^{N \times D'}$ ($D' \ll N$) are two learnable matrices, and $\Theta_i$ is the learnable weight of each convolution layer.
• The spatio-temporal inquirer selects the corresponding hidden representations based on event indexes.
• Those representations are obtained using sum aggregation and zero padding.
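A minimal mix-hop graph convolution sketch. Building a learned adjacency from the two low-rank factors $\alpha_1, \alpha_2$ via softmax(ReLU($\alpha_1 \alpha_2^\top$)) is an assumption in the style of Graph WaveNet [4]; the paper's exact operator may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixHopGCN(nn.Module):
    """Illustrative mix-hop graph convolution with an adaptive adjacency."""
    def __init__(self, n_nodes, d_in, d_out, d_factor=8, hops=2):
        super().__init__()
        self.alpha1 = nn.Parameter(torch.randn(n_nodes, d_factor))  # alpha_1
        self.alpha2 = nn.Parameter(torch.randn(n_nodes, d_factor))  # alpha_2
        self.thetas = nn.ModuleList(nn.Linear(d_in, d_out)          # Theta_k
                                    for _ in range(hops + 1))

    def forward(self, H, A_norm):
        # Learned adjacency from the low-rank factors, combined with the
        # normalized predefined adjacency A_norm.
        A_adp = F.softmax(F.relu(self.alpha1 @ self.alpha2.T), dim=-1)
        A = A_norm + A_adp
        out, Hk = 0, H
        for theta in self.thetas:    # mix-hop: sum Theta_k(A^k H) over hops k
            out = out + theta(Hk)
            Hk = A @ Hk
        return out

N, D = 5, 16
gcn = MixHopGCN(N, D, D)
H = gcn(torch.randn(N, D), torch.eye(N))
```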
METHODOLOGY
Congestion Event Learning Module – Continuous GRU Layer
• Congestion event representation: built from the contextual link representation together with the historical duration of each congestion event after zero padding.
• Insight: the traffic state of each link is a combination of continuous changes and instantaneous changes.
• Apply an ODE-based (ordinary differential equation) neural flow,
where $\phi(t)$ is a continuous function satisfying two properties: i) $\phi(0) = 0$ and ii) $|\phi(t)| < 1$, and $\Gamma(t, x)$ is an arbitrary contractive neural network.
METHODOLOGY
Congestion Event Learning Module – Continuous GRU Layer
• Apply a GRU-ODE-style update:
o A continuous GRU (GRU flow) models the smooth evolution of the hidden state between events.
o A discrete GRU models the instantaneous dynamics at each event.
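A sketch of our reading of this layer, assuming a ResNet-style flow $h(t) = h_0 + \phi(\Delta t)\,\Gamma(h_0, \Delta t)$ for the continuous part ($\phi(0)=0$ and $|\phi|<1$ via tanh; $\Gamma$ is tanh-bounded here, though a strict contraction would need a Lipschitz constraint on the weights) and a discrete GRUCell for the instantaneous jump at each event; this is illustrative, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class ContinuousGRULayer(nn.Module):
    """Continuous-plus-instantaneous hidden-state dynamics (illustrative)."""
    def __init__(self, d_hidden: int):
        super().__init__()
        self.gamma = nn.Sequential(nn.Linear(d_hidden + 1, d_hidden), nn.Tanh())
        self.jump = nn.GRUCell(d_hidden, d_hidden)  # instantaneous update

    def flow(self, h, dt):
        """Evolve h over the inter-event gap dt without solving an ODE."""
        phi = torch.tanh(dt)                        # phi(0)=0, |phi(t)|<1
        inp = torch.cat([h, dt.expand(h.shape[0], 1)], dim=-1)
        return h + phi * self.gamma(inp)            # continuous change

    def forward(self, h, event_emb, dt):
        h = self.flow(h, dt)                        # between events
        return self.jump(event_emb, h)              # at the event

layer = ContinuousGRULayer(32)
h = layer(torch.zeros(1, 32), torch.randn(1, 32), torch.tensor(0.7))
```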
METHODOLOGY
Optimization and Prediction
• Optimize the negative log-likelihood of the probability density of the inter-event time plus the absolute error of the duration prediction (schematically, $\mathcal{L} = -\log p(\tau) + \alpha\,|d - f_d(h)|$),
where $f_d(\cdot)$ denotes the fully connected layer for duration prediction of the next traffic congestion, and $\alpha$ denotes the trade-off ratio.
• Intensity Function Network: to approximate the distribution of the inter-event time and characterize the effect of periodic congestion patterns, a periodic gated unit that adjusts the intensity function is defined.
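A schematic sketch of this objective and of periodic gating; the exact gate in the paper may differ, and $P_i^d$ / $P_i^w$ (time of day and day of week, per the slide notes) are encoded here as embedding lookups of our own choosing:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def stgnpp_style_loss(log_p_tau, d_true, d_pred, alpha=0.5):
    """NLL of the inter-event time plus alpha-weighted absolute duration error."""
    return -log_p_tau.mean() + alpha * (d_true - d_pred).abs().mean()

class PeriodicGatedIntensity(nn.Module):
    """Base intensity from the hidden state, modulated by a periodic gate."""
    def __init__(self, d_hidden, n_tod=288, n_dow=7):  # 288 five-minute slots/day
        super().__init__()
        self.base = nn.Linear(d_hidden, 1)
        self.tod = nn.Embedding(n_tod, d_hidden)       # P_i^d: time of day
        self.dow = nn.Embedding(n_dow, d_hidden)       # P_i^w: day of week
        self.gate = nn.Linear(2 * d_hidden, 1)

    def forward(self, h, tod_idx, dow_idx):
        g = torch.sigmoid(self.gate(
            torch.cat([self.tod(tod_idx), self.dow(dow_idx)], dim=-1)))
        return F.softplus(self.base(h)) * g            # positive intensity
```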
EXPERIMENT AND RESULT
EXPERIMENT
• Metrics:
o Mean Absolute Error (MAE).
o Negative log-likelihood (NLL).
• Datasets: collected from the Amap application.
o Beijing and Chengdu.
o Inter-event times, durations, and periodic features.
• Task:
o Predict link congestion conditions in the next 6 hours.
• Baselines:
o Simple models: Historical Average (HA), Gradient Boosting Decision Tree (GBDT) [1], Gated Recurrent Unit (GRU) [2].
o Spatio-temporal GNNs: DCRNN [3], GraphWaveNet [4], STGODE [5].
o Neural point-process models: NHTPP [6], RMTPP [7], THPP [8], FNN-TPP [9].
EXPERIMENT AND RESULT
EXPERIMENT
[1] Ye, J., Chow, J. H., Chen, J., & Zheng, Z. (2009, November). Stochastic gradient boosted distributed decision trees. In Proceedings of the 18th ACM Conference on Information and Knowledge Management (pp. 2061-2064).
[2] Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
[3] Li, Y., Yu, R., Shahabi, C., & Liu, Y. (2017). Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. arXiv preprint arXiv:1707.01926.
[4] Wu, Z., Pan, S., Long, G., Jiang, J., & Zhang, C. (2019). Graph wavenet for deep spatial-temporal graph modeling. arXiv preprint arXiv:1906.00121.
[5] Fang, Z., Long, Q., Song, G., & Xie, K. (2021, August). Spatial-temporal graph ode networks for traffic flow forecasting. In Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining (pp. 364-373).
[6] Mei, H., & Eisner, J. M. (2017). The neural hawkes process: A neurally self-modulating multivariate point process. Advances in neural information processing systems, 30.
[7] Du, N., Dai, H., Trivedi, R., Upadhyay, U., Gomez-Rodriguez, M., & Song, L. (2016, August). Recurrent marked temporal point processes: Embedding event history to vector. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1555-1564).
[8] Zuo, S., Jiang, H., Li, Z., Zhao, T., & Zha, H. (2020, November). Transformer hawkes process. In International conference on machine learning (pp. 11692-11702). PMLR.
[9] Omi, T., & Aihara, K. (2019). Fully neural network based model for general temporal point processes. Advances in neural information processing systems, 32.
EXPERIMENT AND RESULT
RESULT – Overall Performance
EXPERIMENT AND RESULT
RESULT – Ablation study and Parameter study
CONCLUSION
• Propose a novel spatio-temporal graph neural point process framework for traffic congestion event prediction.
o Utilizes spatio-temporal graph learning together with a neural point process for traffic congestion event modeling.
o Considers periodic features as well as continuous and instantaneous dynamics to improve inter-event dependency learning.
• Experiments show that the proposed framework outperforms traditional methods.
Editor's Notes

  1. Example of the traffic congestion features and link-speed trends from the Beijing dataset adopted in this paper. Sub-figure (a) shows the congestion statistics (occurrence time and duration over 24 hours) of three neighboring links on 12 May 2021; sub-figure (b) shows the speed of link 1 from 7 a.m. to 10 a.m. on the same day.
  2. Zero padding is employed to bring variable-length input sequences to a uniform size.
  3. The green squares denote the moments when congestion events occurred. The green curves and arrows represent the continuous and instantaneous changes in the hidden representation of link states, learned by the GRU flow and the discrete GRU respectively; the grey strip denotes the input contextual information at each time step.
  4. $f_l^+(\cdot)$ denotes the fully connected layer for computing the basic intensity function, $f_p(\cdot)$ the fully connected layer of the periodic gated unit, and $P_i^d$, $P_i^w$ respectively denote the time of day and the day of week of the $i$-th event.