Beyond the spatial and temporal features of traffic, capturing the impact of external factors on travel is an essential step towards accurate traffic forecasting. However, existing studies either seldom consider external factors or neglect the effect that the complex correlations among those factors have on traffic. Knowledge graphs can naturally describe such correlations, but because knowledge graphs and traffic networks are essentially heterogeneous networks, integrating the information in both is challenging. Against this background, this study presents a knowledge-representation-driven traffic forecasting method based on spatial-temporal graph convolutional networks.
In the field of artificial intelligence (AI), planning refers to the process of developing a sequence of actions or steps that an intelligent agent should take to achieve a specific goal or solve a particular problem. AI planning is a fundamental component of many AI systems and has applications in various domains, including robotics, autonomous systems, scheduling, logistics, and more. Here are some key aspects of planning in AI:
Definition of Planning: Planning involves defining a problem, specifying the initial state, setting a goal state, and finding a sequence of actions or a plan that transforms the initial state into the desired goal state while adhering to certain constraints.
State-Space Representation: In AI planning, the problem is often represented as a state-space, where each state represents a snapshot of the system, and actions transform one state into another. The goal is to find a path through this state-space from the initial state to the goal state.
Search Algorithms: AI planning typically relies on search algorithms to explore the state-space efficiently. Uninformed search algorithms, such as depth-first search and breadth-first search, can be used, as well as informed search algorithms, like A* search, which incorporates heuristics to guide the search.
Heuristics: Heuristics are used in planning to estimate the cost or distance from a state to the goal. Heuristic functions help inform the search algorithms by providing an estimate of how close a state is to the solution. Good heuristics can significantly improve the efficiency of the search.
Plan Execution: Once a plan is generated, the next step is plan execution, where the agent carries out the actions in the plan to achieve the desired goal. This often requires monitoring the environment to ensure that the actions are executed as planned.
Temporal and Hierarchical Planning: In more complex scenarios, temporal planning deals with actions that have temporal constraints, and hierarchical planning involves creating plans at multiple levels of abstraction, making planning more manageable in complex domains.
Partial and Incremental Planning: Sometimes, it may not be necessary to create a complete plan from scratch. Partial and incremental planning allows agents to adapt and modify existing plans to respond to changing circumstances.
Applications: Planning is used in a wide range of applications, from manufacturing and logistics (e.g., scheduling production and delivery) to robotics (e.g., path planning for robots) and game playing (e.g., chess and video games).
Challenges: Challenges in AI planning include dealing with large search spaces, handling uncertainty, addressing resource constraints, and optimizing plans for efficiency and performance.
AI planning is a critical component in creating intelligent systems that can autonomously make decisions and solve complex problems.
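The state-space and heuristic-search ideas above can be made concrete with a short sketch. The following is a minimal, illustrative A* implementation over a toy 5x5 grid; the grid world, cost model, and function names are our own, not drawn from any particular planner.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Minimal A*: returns a list of states from start to goal, or None."""
    # Frontier entries: (f = g + h, g so far, state, path taken)
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt, cost in neighbors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + heuristic(nxt), new_g, nxt, path + [nxt]))
    return None

# Toy 4-connected 5x5 grid: states are (x, y) cells, unit step cost.
def grid_neighbors(state):
    x, y = state
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

# Manhattan distance to (4, 4) is an admissible heuristic on this grid,
# so A* returns an optimal (shortest) path.
manhattan = lambda s: abs(s[0] - 4) + abs(s[1] - 4)
path = a_star((0, 0), (4, 4), grid_neighbors, manhattan)
```

With an admissible heuristic such as this one, A* never overestimates the remaining cost, which is exactly the property the "Heuristics" point above relies on.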
Natural Language Processing (NLP) is a branch of Artificial Intelligence (AI) that enables machines to understand human language. Its goal is to build systems that can make sense of text and automatically perform tasks such as translation, spell checking, or topic classification.
UNIT - I PROBLEM SOLVING AGENTS and EXAMPLES.pptx.pdf (JenishaR1)
Replicate human intelligence
Solve Knowledge-intensive tasks
An intelligent connection of perception and action
Building a machine which can perform tasks that require human intelligence, such as:
Proving a theorem
Playing chess
Plan some surgical operation
Driving a car in traffic
Creating systems that can exhibit intelligent behavior, learn new things on their own, demonstrate and explain their reasoning, and advise their users.
What Comprises Artificial Intelligence?
Artificial Intelligence is not just a part of computer science; it is a broad field that draws on many other disciplines. To build AI, we should first understand how intelligence is composed: intelligence is an intangible property of our brain that combines reasoning, learning, problem solving, perception, language understanding, and more.
To achieve these capabilities in a machine or software, Artificial Intelligence draws on the following disciplines:
Mathematics
Biology
Psychology
Sociology
Computer Science
Neuroscience
Statistics
Advantages of Artificial Intelligence
Following are some main advantages of Artificial Intelligence:
High accuracy with fewer errors: AI systems make fewer errors and achieve higher accuracy because they take decisions based on prior experience and information.
High speed: AI systems can make decisions very quickly; this is why an AI system can beat a chess champion at chess.
High reliability: AI machines are highly reliable and can perform the same action multiple times with high accuracy.
Useful in risky areas: AI machines can help in situations such as defusing a bomb or exploring the ocean floor, where employing a human would be risky.
Digital assistants: AI can provide digital assistance to users; for example, various e-commerce websites currently use AI to show products matching a customer's requirements.
Useful as a public utility: AI can serve public utilities, such as self-driving cars that make journeys safer and hassle-free, facial recognition for security, and natural language processing for communicating with humans in their own language.
Resource provisioning optimization in cloud computing (Masoumeh_tajvidi)
The key benefit of cloud computing for the customer is the ability to acquire resources dynamically in response to demand and to pay only for the resources used. This benefit can only be realized when the cloud user can determine the right size of the required resources and allocate them cost-effectively. While resource over-provisioning can cost users more than necessary, resource under-provisioning hurts application performance. The cost effectiveness of cloud computing therefore depends heavily on how well the customer can optimize the cost of renting resources (VMs) from cloud providers. Unfortunately, such cost optimization is still not well understood. The resource provisioning optimization problem from the cloud user's perspective is complicated: it involves considerable uncertainty and heterogeneity in its parameters, and it is inherently a multi-objective problem. Little research has addressed the problem as it occurs in the real world; existing works mostly relax it by ignoring the dynamicity and heterogeneity of the environment, or by treating it as a single-objective optimization. In this PhD research, our target is therefore to solve the resource provisioning optimization problem while accounting for most of its real-world complexity, and to propose a smart approach for solving such uncertain and heterogeneous multi-objective optimization problems. This approach is to be equipped with machine learning (ML) techniques so that it learns from previous stages and makes more accurate decisions in later stages.
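The over- versus under-provisioning trade-off described above can be illustrated with a toy single-objective calculation; all prices, penalties, and the demand distribution below are hypothetical, and the real problem, as the abstract notes, is multi-objective and far more heterogeneous.

```python
# Toy cost model: renting n VMs costs money, but serving less than the
# demanded capacity incurs an SLA penalty. All numbers are hypothetical.
VM_PRICE = 0.10          # $/hour to rent one VM
PENALTY = 0.50           # $/hour per unit of unserved demand
demand_scenarios = [(8, 0.2), (10, 0.5), (14, 0.3)]  # (VMs needed, probability)

def expected_cost(n_vms):
    cost = n_vms * VM_PRICE                          # rental, including idle VMs
    for need, prob in demand_scenarios:
        cost += prob * PENALTY * max(0, need - n_vms)  # expected under-provisioning penalty
    return cost

# Because the penalty rate dominates the rental rate here, the cheapest
# choice is to cover even the peak-demand scenario.
best = min(range(1, 21), key=expected_cost)
```

Even this toy version shows why static sizing is hard: the answer depends entirely on the demand distribution and on the ratio of rental price to penalty, both of which are uncertain in practice.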
Enhancement of Single Moving Average Time Series Model Using Rough k-Means fo... (IJERA Editor)
In the last decade, real-time audio and video services have gained much popularity and now occupy a large portion of total Internet traffic. As real-time services become mainstream, the demand for Quality of Service (QoS) is greater than ever. To satisfy this demand, network resources must be used to the fullest, which requires a prediction model for network traffic as a basis for network management tasks such as congestion control and bandwidth allocation. In this paper, we propose an integrated model that combines Rough K-Means (RKM) clustering with the Single Moving Average (SMA) time series model to improve prediction of packet loading in network traffic. The SMA model is used to predict packet-load volume in real network traffic, and the clustering granules obtained with rough k-means are used to analyze the network data of each year separately. The proposed model integrates the predictions of the conventional SMA model with the cluster centroids obtained from rough k-means clustering. The model is evaluated on online network traffic data collected from the WIDE backbone network; MSE, RMSE, and MAPE metrics are used to examine the results. The experimental results show that the integrated model can effectively improve prediction accuracy with the help of rough k-means clustering, and a comparison between the conventional prediction model and our integrated model is presented.
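The base forecaster named above, a single moving average, is simple enough to sketch. The blending step below is our own stand-in for the paper's centroid integration, and the traffic samples, window size, and centroid values are hypothetical.

```python
# Minimal single moving average (SMA) forecaster: predict the next value
# as the mean of the last `window` observations.
def sma_forecast(series, window=3):
    recent = series[-window:]
    return sum(recent) / len(recent)

# Illustrative integration step: average the SMA prediction with the nearest
# cluster centroid. This blend rule is our simplification, not the paper's
# exact RKM integration.
def integrated_forecast(series, centroids, window=3):
    base = sma_forecast(series, window)
    nearest = min(centroids, key=lambda c: abs(c - base))
    return (base + nearest) / 2

traffic = [120, 132, 101, 134, 140, 121]      # packet-load samples (made up)
centroids = [100, 130]                        # cluster centroids (made up)
plain = sma_forecast(traffic)                 # mean of the last three samples
blended = integrated_forecast(traffic, centroids)
```

The intuition is that the centroid pulls the short-window average toward the typical behavior of the cluster the current load belongs to, damping one-off spikes.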
DEEP LEARNING NEURAL NETWORK APPROACHES TO LAND USE-DEMOGRAPHIC-TEMPORAL BA... (civejjour)
Land use and transportation planning are inter-dependent, as well as being important factors in forecasting urban development. In recent years, predicting traffic based on land use, along with several other variables, has become a worthwhile area of study. In this paper, it is proposed that Deep Neural Network Regression (DNN-Regression) and Recurrent Neural Network (DNN-RNN) methods could be used to predict traffic. These methods used three key variables: land use, demographic and temporal data. The proposed methods were evaluated with other methods, using datasets collected from the City of Calgary, Canada. The proposed DNN-Regression focused on demographic and land use variables for traffic prediction. The study also predicted traffic temporally in the same geographical area by using DNN-RNN. The DNN-RNN used long short-term memory to predict traffic. Comparative experiments revealed that the proposed DNN-Regression and DNN-RNN models outperformed other methods.
Prediction of passenger train using fuzzy time series and percentage change m... (journalBEEI)
In railway operation, predicting railway passenger volume has always been a hot topic: accurately forecasting passenger volume is the foundation on which railway transportation companies optimize transit efficiency and revenue. The goal of this research is to forecast the number of train passengers using a combination of a fuzzy time series approach based on a rate-of-change algorithm and Holt's double exponential smoothing method. In contrast to prior investigations, we focus primarily on determining the next time period. In this case study, the fuzzy time series is employed as the forecasting basis, the rate of change is used to build the universe of discourse, and Holt's double exponential smoothing is used to forecast the following period. The predicted number of railway passengers for January 2020 is 38199, with a small average forecasting error rate of 0.89 percent and a mean square error of 131325. The forecast can also help rail firms identify future passenger needs, decide whether to add train cars or run new trains, and plan how to distribute tickets.
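Holt's double exponential smoothing, one component of the combination above, can be sketched directly. The smoothing parameters below are illustrative defaults, not the values tuned in the paper.

```python
def holt_forecast(series, alpha=0.5, beta=0.3, steps=1):
    """Holt's double (linear-trend) exponential smoothing forecast.

    `alpha` smooths the level, `beta` smooths the trend; the forecast
    `steps` ahead is level + steps * trend. Parameter values are illustrative.
    """
    level = series[0]
    trend = series[1] - series[0]            # initialize trend from first two points
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + steps * trend

# On a perfectly linear series the method recovers the trend exactly,
# so the one-step-ahead forecast continues the line.
forecast = holt_forecast([10, 20, 30, 40])   # next value on the line: 50
```

Unlike a plain moving average, the explicit trend term lets the forecast follow sustained growth or decline in passenger volume rather than lagging behind it.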
Performance Simulation and Analysis for LTE System Using Human Behavior Queue ... (ijwmn)
Understanding the nature of traffic has been a key concern of researchers, particularly over the last two decades, and extensive high-quality studies have observed that traffic in different kinds of IP/wireless IP networks is driven by human operators. Despite recent findings of real-time human behavior in measured traffic from data networks, much of the current understanding of IP traffic modeling is still based on simplistic probability-distributed traffic. Unlike most existing studies, which are primarily based on simplistic probabilistic models and traditional scheduling algorithms, this research presents an analytical performance model for real-time human-behavior queue systems with intelligent task-management traffic input, scheduled by a novel and promising scheduling mechanism for the 4G-LTE system. Our proposed model is based on a human-behavior queuing system that considers real-time traffic exhibiting homogeneous task characteristics. We analyze the model on the basis of the newly proposed scheduling scheme for 4G-LTE and present closed-form expressions of expected response times for real-time traffic classes. We also develop a discrete-event simulator to study the behavior of arriving real-time task traffic under the proposed scheduling mechanism. The results indicate that our scheduling algorithm gives preferential treatment to real-time applications such as voice and video, but not to the extent that data applications starve for bandwidth, and that it outperforms the other scheduling schemes available in the market.
Multi-task learning using non-linear autoregressive models and recurrent neur... (IJECEIAES)
Tide level forecasting plays an important role in environmental management and development. Current tide level forecasting methods are usually built for single-task problems: a model trained on the tide level data of one location is used only to forecast the tide level at that location, not at others. This study proposes a new method for tide level prediction at multiple locations simultaneously. The method combines the nonlinear autoregressive moving average with exogenous inputs (NARMAX) model and recurrent neural networks (RNNs), and incorporates them into a multi-task learning (MTL) framework. Experiments are designed and performed to compare single-task learning (STL) and MTL with and without non-linear autoregressive models. Three RNN variants, namely long short-term memory (LSTM), gated recurrent unit (GRU), and bidirectional LSTM (BiLSTM), are employed together with the non-linear autoregressive models. A case study on tide level forecasting at many different geographical locations (5 to 11 locations) is conducted. Experimental results demonstrate that the proposed architectures outperform the classical single-task prediction methods.
Mobile elements scheduling for periodic sensor applications (ijwmn)
In this paper, we investigate the problem of designing mobile element tours such that the length of each tour is below a pre-determined bound and the depth of the multi-hop routing trees is bounded by k. The path of the mobile element is designed to visit a subset of the nodes (cache points), which store the data of other nodes. To address this problem, we propose two heuristic-based solutions that take the distribution of the nodes into consideration when establishing the tour. The results of our experiments indicate that our schemes significantly outperform the best comparable scheme in the literature.
Adaptive traffic lights based on traffic flow prediction using machine learni... (IJECEIAES)
Traffic congestion prediction is one of the essential components of intelligent transport systems (ITS), owing to the rapid growth of population and, consequently, the high number of vehicles in cities. Nowadays, the problem of traffic congestion attracts more and more attention from researchers in the field of ITS. Traffic congestion can be predicted in advance by analyzing traffic flow data. In this article, we used machine learning algorithms such as linear regression, random forest regressor, decision tree regressor, gradient boosting regressor, and k-nearest neighbors regressor to predict traffic flow and reduce traffic congestion at intersections. We used the public roads dataset from the UK national road traffic data to test our models. All machine learning algorithms obtained good performance metrics, indicating that they are valid for implementation in smart traffic light systems. Next, we implemented an adaptive traffic light system based on a random forest regressor model, which adjusts the timing of green and red lights depending on the road width, traffic density, types of vehicles, and expected traffic. Simulations of the proposed system show a 30.8% reduction in traffic congestion, thus justifying its effectiveness and the interest of deploying it to regulate the signaling problem at intersections.
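As a rough sketch of the regression step described above, the following fits a random forest regressor to synthetic traffic-flow data. The features (hour, road width, density) echo those mentioned in the abstract, but the data-generating process is entirely our own illustration, not the UK road dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for a traffic-flow dataset (all relationships invented):
# flow depends on vehicle density, road width, and time of day, plus noise.
rng = np.random.default_rng(0)
n = 500
hour = rng.integers(0, 24, size=n)
road_width = rng.uniform(5, 20, size=n)        # metres (hypothetical feature)
density = rng.uniform(0, 1, size=n)            # vehicles per metre (hypothetical)
flow = (50 * density + 2 * road_width
        + 10 * np.sin(hour * np.pi / 12)       # crude daily cycle
        + rng.normal(0, 2, n))

X = np.column_stack([hour, road_width, density])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, flow)
r2 = model.score(X, flow)   # in-sample fit; a real evaluation would hold out data
```

A system like the one in the abstract would then feed `model.predict(...)` for the next interval into the green/red timing logic; that control step is outside this sketch.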
Path constrained data gathering scheme for wireless sensor networks with mobi... (ijwmn)
Wireless Sensor Networks (WSNs) have emerged as a promising solution for a variety of applications. Recently, in order to increase the lifetime of the network, many proposals have introduced the use of Mobile Elements (MEs) as mechanical carriers to collect data. In this paper, we investigate the problem of designing the mobile element tour to visit a subset of the nodes, termed caching points, where the length of the tour is bounded by a pre-determined length. Caching can be implemented at various points in the network such that any node is at most k hops away from one of these caching points. To address this problem, we present a heuristic-based solution that works by partitioning the network such that the depth of each partition is bounded by k; then, in each partition, the minimum number of required caching points is identified. We compare the resulting performance of our algorithm with the best known comparable schemes in the literature.
Traffic Congestion Prediction using Deep Reinforcement Learning in Vehicular ... (IJCNCJournal)
In recent years, a new wireless network called the vehicular ad-hoc network (VANET) has become a popular research topic. VANETs allow vehicles to communicate with each other and with roadside units, sharing information such as vehicle velocity, location, and direction. In general, when many vehicles are likely to use a common route to the same destination, the route can become congested and should be avoided; it would be better if vehicles could accurately predict traffic congestion and then avoid it. Therefore, in this work, deep reinforcement learning in VANETs is proposed to enhance the ability to predict traffic congestion on the roads. Furthermore, different types of neural networks, namely the Convolutional Neural Network (CNN), Multilayer Perceptron (MLP), and Long Short-Term Memory (LSTM), are investigated and compared within this deep reinforcement learning model to discover the most effective one. Our proposed method is tested by simulation: the traffic scenarios are created using the Simulation of Urban Mobility (SUMO) traffic simulator before being integrated with the deep reinforcement learning model, and the simulation procedures, as well as the programming used, are described in detail. The performance of our proposed method is evaluated using two metrics, the average travelling time delay and the average waiting time delay of vehicles. According to the simulation results, both metrics gradually improve over multiple runs, since our proposed method receives feedback from the environment. In addition, the results without and with the three deep learning algorithms (CNN, MLP, and LSTM) are compared. It is evident that the deep reinforcement learning model works effectively when traffic density is neither too high nor too low, and it can be concluded that the effective algorithms for traffic congestion prediction, in descending order, are MLP, CNN, and LSTM.
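As a rough illustration of the reinforcement-learning idea behind such congestion avoidance, the sketch below runs tabular Q-learning on a toy two-route choice problem. The environment, delay values, and parameters are invented for illustration and are far simpler than the SUMO-based, neural-network setup described above.

```python
import random
random.seed(1)

# Toy single-state problem: an agent repeatedly picks one of two routes.
# Route 0 is congested (long delay), route 1 is not. All numbers are made up.
delays = {0: 30, 1: 10}          # average travel delay per route (seconds)
q = {0: 0.0, 1: 0.0}             # one Q-value per route
alpha, epsilon = 0.1, 0.2        # learning rate, exploration rate

for episode in range(500):
    # Epsilon-greedy action selection: mostly exploit, sometimes explore.
    route = random.choice([0, 1]) if random.random() < epsilon else max(q, key=q.get)
    reward = -delays[route] + random.gauss(0, 1)   # noisy negative delay
    q[route] += alpha * (reward - q[route])        # one-step Q update

best_route = max(q, key=q.get)   # the learned, less-congested choice
```

The feedback loop here, act, observe delay, adjust the value estimate, is the same mechanism that lets the VANET model above improve its travelling and waiting time delays over multiple runs.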
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
2. Why this?
◦ Timely, accurate traffic forecasting is crucial for urban traffic control and guidance. Due to the high nonlinearity and
complexity of traffic flow, traditional methods cannot satisfy the requirements of mid-and-long term prediction
tasks and often neglect spatial and temporal dependencies.
◦ Traffic prediction is something we want to incorporate as part of our energy transition projects, such as the Sibyl
project, which aims to help people across the world switch to electric vehicles by developing that charging network
and making it optimal and affordable. Shell has already done this for Germany, the Netherlands, the UK,
the US (ongoing), and Singapore (coming up from January 2022, of which I am the lead).
◦ Traffic prediction is usually a challenge for many reasons: primarily data availability; then the fact that we cannot
expect streets to have a nice grid structure; then data preparation; and finally choosing the
proper model for prediction. I have personally gone through these challenges lately, which we will discuss today.
3. What is this?
◦ Transportation plays a vital role in everybody's daily life. According to a survey from 2015, U.S. drivers spend about 48
minutes on average behind the wheel daily.
◦ Under this circumstance, accurate real-time forecasting of traffic conditions is of paramount importance for road users,
private sectors, and governments. Widely used transportation services, such as flow control, route planning, and
navigation, also rely heavily on high-quality traffic condition evaluation.
◦ In general, multiscale traffic forecasting is the premise and foundation of urban traffic control and guidance, and is also
one of the main functions of an Intelligent Transportation System (ITS).
◦ In traffic studies, the fundamental variables of traffic flow, namely speed, volume, and density, are typically chosen as
indicators to monitor the current status of traffic conditions and to predict the future.
◦ Based on the length of prediction, traffic forecasting is generally classified into two scales: short-term (5–30 min)
and medium-and-long term (over 30 min).
4. Challenges:
◦ Most prevalent statistical approaches (for example, linear regression) are able to perform well
on short-interval forecasts. However, due to the uncertainty and complexity of traffic flow, those
methods are less effective for relatively long-term predictions.
◦ Previous studies on mid-and-long term traffic prediction can be roughly divided into two
categories: dynamical modeling and data-driven methods.
◦ Dynamical modeling uses mathematical tools (e.g. differential equations) and physical
knowledge to formulate traffic problems through computational simulation [Vlahogianni, 2015].
◦ To achieve a steady state, the simulation process not only requires sophisticated systematic
programming but also consumes massive computational power. Impractical assumptions and
simplifications in the modeling also degrade the prediction accuracy. Therefore, with the rapid
development of traffic data collection and storage techniques, a large group of researchers is
shifting its attention to data-driven approaches.
◦ Classic statistical and machine learning models are two major representatives of data-driven
methods.
Chirantan Gupta, Data Scientist, Royal Dutch Shell
5. Challenges:
◦ In time series analysis, the autoregressive integrated moving average (ARIMA) model and its variants are
among the most consolidated approaches based on classical statistics [Ahmed and Cook, 1979;
Williams and Hoel, 2003]. However, this type of model is limited by the stationarity assumption of
time sequences and fails to take spatio-temporal correlations into account.
◦ Therefore, these approaches have constrained representability for highly nonlinear traffic flow.
Recently, classic statistical models have been vigorously challenged by machine learning
methods on traffic prediction tasks.
◦ Higher prediction accuracy and more complex data modeling can be achieved by models
such as the k-nearest neighbors algorithm (KNN), support vector machines (SVM), and neural
networks (NN).
6. Recent Attempts
DEEP LEARNING APPROACHES:
◦ Deep learning approaches have been widely and successfully applied to various traffic tasks
nowadays. Significant progress has been made in related work, for instance, the deep belief network
(DBN) [Jia et al., 2016; Huang et al., 2014]. (For reference, I can share details of what a DBN is
in the auxiliary material, courtesy of a Medium blog post.)
◦ However, it is difficult for these dense networks to extract spatial and temporal features from the input
jointly. Moreover, with narrow constraints or even a complete absence of spatial attributes, the
representative ability of these networks is seriously hindered.
◦ To take full advantage of spatial features, some researchers use convolutional neural networks (CNN)
to capture adjacent relations in the traffic network, along with employing recurrent neural
networks (RNN) on the time axis.
◦ By combining a long short-term memory (LSTM) network [Hochreiter and Schmidhuber, 1997] and a 1-D
CNN, Wu and Tan [2016] presented CLTFP, a feature-level fused architecture combining CNN and LSTM
to forecast future traffic flow. Although it adopted a straightforward strategy, CLTFP still made the first
attempt to align spatial and temporal regularities.
◦ Afterwards, Shi et al. [2015] proposed the convolutional LSTM, an extended fully connected
LSTM (FC-LSTM) with embedded convolutional layers. (For more information on how RNNs and
LSTMs work, I will discuss another article as auxiliary material, courtesy of Colah, 2015.)
7. Shortcomings
◦ However, the standard convolutional operation applied restricts the model to processing only grid
structures (e.g. images, videos) rather than general domains.
◦ Meanwhile, recurrent networks for sequence learning require iterative training, which introduces
error accumulation step by step. Additionally, RNN-based networks (including LSTM) are widely
known to be difficult to train and computationally heavy.
8. Remedial Measures
◦ To overcome these issues, I propose several strategies to effectively model the temporal dynamics and spatial dependencies of
traffic flow. To fully utilize spatial information, we model the traffic network as a general graph instead of treating it separately
(e.g. as grids or segments). To handle the inherent deficiencies of recurrent networks, we employ a fully convolutional structure
on the time axis. Above all, we propose a novel deep learning architecture, the spatial-temporal graph convolutional network, for
traffic forecasting tasks. This architecture comprises several spatial-temporal convolutional blocks, which are a combination of
graph convolutional layers [Defferrard et al., 2016] and convolutional sequence learning layers, to model spatial and temporal
dependencies.
NOVELTY:
◦ To the best of my knowledge, this is the first attempt to apply purely convolutional structures to extract spatial-temporal
features simultaneously from graph-structured time series in a traffic study. We evaluate the proposed model on two
real-world traffic datasets. Experiments show that our framework outperforms existing baselines in prediction tasks with
multiple preset prediction lengths and network scales.
9. Traffic Prediction on Road Graphs
Traffic forecasting is a typical time-series prediction problem, i.e. predicting the most likely traffic measurements (e.g. speed or traffic flow) in the next H time steps given the previous M traffic observations, as shown on the left.
Formula for traffic prediction that I am attempting to use
10. Traffic Prediction on Road Graphs
◦ In this work, we define the traffic network on a graph and focus on structured traffic time series. The observation v_t is not
independent but linked by pairwise connections in the graph. Therefore, the data point v_t can be regarded as a graph signal
defined on an undirected graph (or a directed one) G with weights W_ij, as shown here.
◦ At the t-th time step, the graph is G_t = (V_t, E, W), where V_t is a finite set of vertices, corresponding to
the observations from n monitor stations in a traffic network; E is a set of edges, indicating
the connectedness between stations; and W ∈ R^(n×n) denotes the weighted adjacency matrix of G_t.
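The graph-signal view above can be sketched in a few lines of NumPy. The weights and speed values below are made up for illustration; only the structure (symmetric weighted adjacency matrix, one observation per station) follows the slide.

```python
import numpy as np

# Toy road graph with n = 3 monitor stations (hypothetical numbers).
# W[i, j] is the edge weight between stations i and j; 0 means no edge.
W = np.array([
    [0.0, 0.8, 0.0],
    [0.8, 0.0, 0.5],
    [0.0, 0.5, 0.0],
])

# Graph signal v_t: one speed observation (km/h) per station at time step t.
v_t = np.array([62.0, 48.0, 55.0])

# The signal is "linked by pairwise connection": for example, a weighted
# average over neighbors uses the graph to relate each station's value
# to the observations around it.
degree = W.sum(axis=1)             # weighted degree of each node
neighbor_avg = (W @ v_t) / degree  # graph-weighted neighbor mean
```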
11. Spectral Convolution
◦ In a spectral graph convolution, we perform an eigendecomposition of
the Laplacian matrix of the graph. This eigendecomposition helps us understand the
underlying structure of the graph, with which we can identify clusters/sub-groups of the graph.
This is done in the Fourier space. An analogy is PCA, where we understand the spread of the
data by performing an eigendecomposition of the covariance matrix. One difference between
the two methods concerns the eigenvalues: smaller eigenvalues explain the
structure of the data better in spectral convolution, whereas it is the opposite in PCA. ChebNet and
GCN are some commonly used deep learning architectures that use spectral convolution.
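The eigendecomposition step described above can be sketched on a toy graph (the 4-node path graph below is an assumption for illustration): the eigenvectors of the Laplacian form the graph Fourier basis, and filtering happens on the eigenvalues.

```python
import numpy as np

# Toy 4-node path graph (assumed for illustration).
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
D = np.diag(W.sum(axis=1))
L = D - W                      # combinatorial graph Laplacian

# Eigendecomposition L = U diag(lam) U^T; U is the graph Fourier basis.
lam, U = np.linalg.eigh(L)

# Graph Fourier transform of a signal x, spectral filtering, inverse transform.
x = np.array([1.0, 2.0, 3.0, 4.0])
x_hat = U.T @ x                # forward graph Fourier transform
g = np.exp(-0.5 * lam)         # example low-pass filter on the eigenvalues
y = U @ (g * x_hat)            # filtered signal back in the vertex domain
```

The smallest eigenvalue of a graph Laplacian is always 0; the low eigenvalues carry the coarse "cluster" structure mentioned above.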
12. Data Acquisition Challenges
◦ Most data to be acquired for this work will have to be purchased from external vendors.
◦ I have already talked with some key people at Shell, such as Saptarshi Das, PhD, who has
mentioned that I will get his full support regarding my work and data acquisition.
◦ Other than that, Shell purchases many datasets from different paid sources; we can use those
datasets as well with Shell's permission.
◦ But data acquisition is still challenging: open-source traffic data, even when available, is
either a sample or too small for any research work, so we need to reach out to
external vendors such as HERE who can help us acquire it.
13. Convolutions on Graphs
◦ A standard convolution for regular grids is clearly not applicable to general graphs. There are
currently two basic approaches to generalizing CNNs to structured data forms.
One is to expand the spatial definition of a convolution [Niepert et al., 2016], and the other is
to manipulate in the spectral domain with graph Fourier transforms [Bruna et al., 2013].
◦ The former approach rearranges the vertices into certain grid forms which can be processed by
normal convolutional operations. The latter introduces the spectral framework to apply
convolutions in spectral domains, often referred to as the spectral graph convolution.
14. What was the result?
◦ Several follow-up studies make the graph convolution more promising by reducing the computational complexity from O(n²)
to linear [Defferrard et al., 2016; Kipf and Welling, 2016].
15. Continued..
◦ Θ ∗G x ≈ θ₀x + θ₁((2/λ_max)L − I_n)x ≈ θ₀x − θ₁(D^(−1/2) W D^(−1/2))x
◦ where θ₀, θ₁ are two shared parameters of the kernel. In order to constrain parameters and
stabilize numerical performance, θ₀ and θ₁ are replaced by a single parameter θ by letting θ = θ₀
= −θ₁; W and D are renormalized by W̃ = W + I_n and D̃_ii = Σ_j W̃_ij separately.
◦ Then the graph convolution can be written as:
◦ Θ ∗G x = θ(I_n + D^(−1/2) W D^(−1/2))x = θ(D̃^(−1/2) W̃ D̃^(−1/2))x
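The 1st-order formula above can be sketched numerically. The toy adjacency matrix, input signal, and the value of θ below are all made up; the code only demonstrates the renormalization trick W̃ = W + I_n and the propagation θ(D̃^(−1/2) W̃ D̃^(−1/2))x.

```python
import numpy as np

# Toy graph (assumed): node 0 connected to nodes 1 and 2.
W = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
n = W.shape[0]

W_tilde = W + np.eye(n)                   # renormalization: add self-loops
d_tilde = W_tilde.sum(axis=1)             # D~_ii = sum_j W~_ij
D_inv_sqrt = np.diag(d_tilde ** -0.5)
A_hat = D_inv_sqrt @ W_tilde @ D_inv_sqrt # D~^{-1/2} W~ D~^{-1/2}

theta = 0.5                               # single shared kernel parameter (made up)
x = np.array([1.0, 2.0, 3.0])
y = theta * (A_hat @ x)                   # one graph convolution step
```

The renormalization keeps the spectrum of the propagation matrix bounded in [−1, 1], which is what stabilizes repeated application of the layer.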
16. Why Laplace Transform?
◦ The purpose of the Laplace transform is to turn ordinary differential equations (ODEs) into algebraic
equations, which makes ODEs easier to solve. However, the Laplace transform gives one more than that: it also
provides qualitative information on the solution of an ODE; the prime example is the famous final value
theorem, lim_{t→∞} f(t) = lim_{s→0} sF(s).
◦ Not all functions have a Fourier transform. The Laplace transform is a generalized Fourier transform, since it allows one to
obtain transforms of functions that have no Fourier transform. Does your function f(t) grow exponentially with time? Then it
has no FT. No problem: just multiply it by a decaying exponential that decays faster than f grows, and you now have a function
that has a Fourier transform. The FT of that new function is the LT (evaluated on a line in the complex plane parallel to the
imaginary axis).
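The final value theorem can be checked numerically on a simple example. The signal f(t) = 1 − e^(−2t) is chosen for illustration; its Laplace transform F(s) = 1/s − 1/(s + 2) is derived by hand, and both sides of the theorem approach 1.

```python
import math

# Example signal (hypothetical) that settles to 1: f(t) = 1 - e^{-2t}.
def f(t):
    return 1.0 - math.exp(-2.0 * t)

# Its Laplace transform, F(s) = 1/s - 1/(s + 2), derived by hand.
def F(s):
    return 1.0 / s - 1.0 / (s + 2.0)

# Final value theorem: lim_{t->inf} f(t) = lim_{s->0+} s * F(s).
steady_state = f(50.0)       # f(t) for large t, effectively the limit
fvt_value = 1e-6 * F(1e-6)   # s * F(s) for small s
```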
17. Proposed Model
◦ The model is composed of several spatio-temporal convolutional blocks, each of which is formed as a "sandwich" structure with two gated
sequential convolution layers and one spatial graph convolution layer in between.
Architecture of spatio-temporal graph convolutional networks. The framework STGCN consists of two
spatio-temporal convolutional (ST-Conv) blocks and a fully connected output layer in the end. Each ST-Conv block
contains two temporal gated convolution layers and one spatial graph convolution layer in the middle.
The residual connection and bottleneck strategy are applied inside each block. The input v_{t−M+1}, ..., v_t is
uniformly processed by ST-Conv blocks to explore spatial and temporal dependencies coherently.
Comprehensive features are integrated by an output layer to generate the final prediction v̂.
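The temporal gated convolution inside each ST-Conv block can be sketched as a 1-D convolution whose output is gated by a sigmoid (a GLU). The shapes, filter taps, and input series below are made-up illustrations; the real layers operate on multi-channel tensors over all nodes at once.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def temporal_gated_conv(x, kernel_P, kernel_Q):
    """GLU-gated 1-D convolution: output = P * sigmoid(Q).
    x: length-M time series for one node; kernels: Kt filter taps each."""
    Kt, M = len(kernel_P), len(x)
    out = np.empty(M - Kt + 1)
    for i in range(M - Kt + 1):
        window = x[i:i + Kt]
        P = window @ kernel_P     # linear path
        Q = window @ kernel_Q     # gate path
        out[i] = P * sigmoid(Q)   # GLU gating
    return out

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])           # made-up speed series
y = temporal_gated_conv(x,
                        np.array([0.2, 0.3, 0.5]), # made-up taps
                        np.array([0.1, 0.1, 0.1]))
# With Kt = 3, the output is Kt - 1 = 2 steps shorter than the input.
```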
18. Graph CNNs for Extracting Spatial Features
◦ A traffic network is generally organized as a graph structure, so it is natural and reasonable to formulate road networks as graphs
mathematically. However, previous studies neglect the spatial attributes of traffic networks: the connectivity and globality of the
networks are overlooked, since they are split into multiple segments or grids. Even with 2-D convolutions on grids, only the
spatial locality can be captured, and only roughly, due to compromises of data modeling. Accordingly, in our model, the graph convolution is
employed directly on graph-structured data to extract highly meaningful patterns and features in the space domain.
◦ The computation of the kernel Θ in the graph convolution described earlier can be expensive, due to the O(n²) multiplications with
the graph Fourier basis U.
◦ Can we improve it? Perhaps using the Chebyshev polynomial approximation.
19. Chebyshev’s Polynomial
◦ To localize the filter and reduce the number of parameters, the kernel Θ can be restricted to a polynomial of Λ as
Θ(Λ) = Σ_{k=0}^{K−1} θ_k Λ^k, where θ ∈ R^K is a vector of polynomial coefficients. K is the kernel size of the graph convolution, which determines
the maximum radius of the convolution from central nodes. Traditionally, the Chebyshev polynomial T_k(x) is used to approximate
kernels as a truncated expansion of order K−1 as Θ(Λ) ≈ Σ_{k=0}^{K−1} θ_k T_k(Λ̃), with rescaled Λ̃ = (2/λ_max)Λ − I_n, where λ_max denotes the
largest eigenvalue of L [Hammond et al., 2011]. The graph convolution can then be rewritten as
Θ ∗G x ≈ Σ_{k=0}^{K−1} θ_k T_k(L̃)x, with rescaled L̃ = (2/λ_max)L − I_n.
This reduces the time complexity to O(K|E|), where |E| is the number of edges of the graph.
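The Chebyshev approximation above can be sketched with the standard recurrence T_0(L̃)x = x, T_1(L̃)x = L̃x, T_k(L̃)x = 2L̃T_{k−1}(L̃)x − T_{k−2}(L̃)x, which avoids the eigendecomposition entirely. The 3-node graph and the θ values are made up for illustration.

```python
import numpy as np

# Toy 3-node path graph (assumed for illustration).
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
D = np.diag(W.sum(axis=1))
L = D - W
lam_max = np.linalg.eigvalsh(L).max()
L_tilde = (2.0 / lam_max) * L - np.eye(3)  # rescaled Laplacian, spectrum in [-1, 1]

def cheb_conv(x, thetas):
    """Theta *G x ~= sum_k theta_k T_k(L~) x via the Chebyshev recurrence."""
    Tx_prev, Tx = x, L_tilde @ x                       # T_0 x and T_1 x
    out = thetas[0] * Tx_prev + thetas[1] * Tx
    for theta_k in thetas[2:]:
        Tx_prev, Tx = Tx, 2.0 * (L_tilde @ Tx) - Tx_prev
        out = out + theta_k * Tx
    return out

y = cheb_conv(np.array([1.0, 0.0, 0.0]), thetas=[0.4, 0.3, 0.3])  # K = 3 (made up)
```

Each recurrence step costs one sparse matrix-vector product, which is where the O(K|E|) complexity comes from.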
20. Dataset Description
◦ Datasets: https://drive.google.com/drive/folders/10FOTa6HXPqX8Pf5WRoRwcFnW9BrNZEIX
- adj_mat.npy: weighted adjacency matrix with shape [num_road * num_road].
- node_values.npy: historical speed records with shape [len_seq * num_road] (len_seq = day_slot * num_dates).
- num_route: number of routes in your dataset.
◦ PeMSD7 was collected from the Caltrans Performance Measurement System (PeMS) in real time by over
39,000 sensor stations deployed across the major metropolitan areas of the California state highway
system [Chen et al., 2001]. The dataset is aggregated into 5-minute intervals from 30-second
data samples. We randomly select a medium- and a large-scale subset in District 7 of California,
containing 228 and 1,026 stations, labeled PeMSD7(M) and PeMSD7(L), respectively, as data
sources.
22. What is a .npy file?
◦ An NPY file is a NumPy array file created by
Python software with the NumPy library
installed. It contains an array saved in the NumPy
(NPY) file format. NPY files store all the
information required to reconstruct an array on
any computer, including dtype and shape
information.
◦ How to open it? A sample example is given to the left
(https://stackoverflow.com/questions/33885051/using-spyder-python-to-open-npy-file).
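A minimal save/load round trip shows what the format preserves. The file name below is arbitrary; the actual dataset files are adj_mat.npy and node_values.npy.

```python
import os
import tempfile
import numpy as np

# A small array with an explicit dtype and shape.
arr = np.arange(6, dtype=np.float32).reshape(2, 3)

# np.save writes the data together with its dtype and shape metadata.
path = os.path.join(tempfile.gettempdir(), 'example.npy')
np.save(path, arr)

# np.load reconstructs the identical array on any machine.
loaded = np.load(path)
print(loaded.shape, loaded.dtype)  # (2, 3) float32
```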
23. What does this code do?
24. Implementation Details
Code written by me.
Datasets: https://drive.google.com/drive/folders/10FOTa6HXPqX8Pf5WRoRwcFnW9BrNZEIX
- adj_mat.npy
- node_values.npy
27. Results Obtained
Compared with FNN and ARIMA. Conclusion: obtained a better MAE.
I did not use the original dataset, as it needed to be purchased; instead I used an open-source sample for my comparison.
Training loss: 0.12442259042730007
Validation loss: 0.13693484663963318
Validation MAE: 3.4758996963500977
These are the values after 120 epochs. Beyond this point the test loss tends to increase again, though it may decline once more if training is allowed to continue for a larger number of epochs. The number of epochs was initially set to 1000 with a batch size of 50, but I had to stop training after 120 epochs.
For comparison, the validation MAE on the same task is 4.02 for FNN and 5.86 for ARIMA on samples of PeMSD7(M) and PeMSD7(L) (source: Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting, Yu et al., 2018).
28. Plot of the training and validation loss
29. Data Preprocessing(Yu et al. 2018)
◦ The standard time interval in the two datasets is set to 5 minutes; thus, every node of the road graph contains 288 data points per
day. Linear interpolation is used to fill missing values after data cleaning. In addition, the data inputs are normalized using the
Z-score method.
◦ In PeMSD7, the adjacency matrix of the road graph is computed based on the distances among stations in the traffic network,
using a thresholded Gaussian kernel of the pairwise distances.
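A sketch of that distance-based weighting, following Yu et al. [2018]: W_ij = exp(−d_ij²/σ²) if i ≠ j and the value is at least ε, else 0. The σ² = 10 and ε = 0.5 values are the ones reported in the paper; the distance matrix below is made up.

```python
import numpy as np

def weighted_adjacency(dist, sigma2=10.0, eps=0.5):
    """Thresholded Gaussian kernel over pairwise station distances."""
    W = np.exp(-dist ** 2 / sigma2)  # closer stations get larger weights
    W[W < eps] = 0.0                 # drop weak connections for sparsity
    np.fill_diagonal(W, 0.0)         # no self-loops (W_ij defined for i != j)
    return W

# Made-up pairwise distances between 3 stations.
dist = np.array([[0.0, 1.0, 4.0],
                 [1.0, 0.0, 2.0],
                 [4.0, 2.0, 0.0]])
W = weighted_adjacency(dist)
```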
30. Experimental Settings and claim(Yu et al. 2018)
◦ All experiments are compiled and tested on a Linux cluster (CPU: Intel(R) Xeon(R) CPU E5-2620
v4 @ 2.10 GHz, GPU: NVIDIA GeForce GTX 1080). In order to eliminate atypical traffic, only
workday traffic data are adopted in the experiment [Li et al., 2015]. A grid search
strategy is executed to locate the best parameters on validation sets. All the tests use 60 minutes as the
historical time window, i.e. 12 observed data points (M = 12) are used to forecast traffic
conditions in the next 15, 30, and 45 minutes (H = 3, 6, 9).
◦ In my case I did not have a GPU installed, and hence training can be immensely
delayed.
◦ To measure and evaluate the performance of different methods, Mean Absolute Error (MAE),
Mean Absolute Percentage Error (MAPE), and Root Mean Squared Error (RMSE) are adopted.
31. Claim (Yu et al. 2018)
◦ The claim of this paper is that STGCN performs better than Historical Average (HA), Auto-Regressive
Integrated Moving Average (ARIMA), Feed-Forward Neural Network (FNN), and Fully-Connected
LSTM (FC-LSTM) [Sutskever et al., 2014] baselines.
◦ For PeMSD7(M/L), the channels of the three layers in each ST-Conv block are 64, 16, and 64, respectively. Both
the graph convolution kernel size K and the temporal convolution kernel size Kt are set to 3 in the
model STGCN(Cheb) with the Chebyshev polynomials approximation, while K is set to 1 in the model
STGCN(1st) with the 1st-order approximation. The models are trained by minimizing the mean
square error using RMSprop for 50 epochs with a batch size of 50. The initial learning rate is 10⁻³
with a decay rate of 0.7 after every 5 epochs.
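The stated learning-rate schedule (initial rate 10⁻³, multiplied by 0.7 after every 5 epochs) can be sketched as a simple step decay:

```python
def learning_rate(epoch, base_lr=1e-3, decay=0.7, step=5):
    """Step decay: multiply the base rate by `decay` once per `step` epochs."""
    return base_lr * decay ** (epoch // step)

# Epochs 0, 5, 10, 15 -> 1e-3, 7e-4, 4.9e-4, 3.43e-4
rates = [learning_rate(e) for e in range(0, 20, 5)]
```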
32. Claim (Yu et al. 2018)
The proposed model achieves the best performance with statistical significance (two-tailed t-test, α =
0.01, P < 0.01) in all three evaluation metrics. We can easily observe that traditional statistical
and machine learning methods may perform well for short-term forecasting, but their long-term
predictions are not accurate because of error accumulation, memorization issues, and the absence
of spatial information. The ARIMA model performs the worst due to its incapability of handling
complex spatio-temporal data. Deep learning approaches generally achieve better prediction
results than traditional machine learning models (CLAIM).
Benefits of Spatial Topology
Previous methods did not incorporate spatial topology and modeled the time series in a coarse-grained
way. In contrast, by modeling the spatial topology of the sensors, the STGCN model achieves
a significant improvement on short and mid-and-long term forecasting.
33. Results (Yu et al., 2018)
The claim is: STGCN captures the trend of
rush hours more accurately than other
methods, and it detects the end of
rush hours earlier than others. Stemming
from the efficient graph convolution and
stacked temporal convolution structures,
the model is capable of responding quickly to
dynamic changes in the traffic
network without the over-reliance on
historical averages that most recurrent
networks exhibit.
34. Conclusion and Future Work
◦ This paper (Spatio-Temporal Graph Convolutional Networks: A Deep Learning
Framework for Traffic Forecasting, Yu et al., 2018) claims a novel deep learning framework, STGCN, for traffic
prediction, integrating graph convolution and gated temporal convolution through spatio-temporal
convolutional blocks. Experiments show that the model outperforms other state-of-the-art methods on
two real-world datasets, indicating its great potential for exploring spatio-temporal structures from the
input. It also achieves faster training, easier convergence, and fewer parameters, with flexibility and
scalability. These features are quite promising and practical for scholarly development and large-scale
industry deployment. In the future, the authors will further optimize the network structure and parameter
settings. Moreover, the proposed framework can be applied to more general spatio-temporal structured
sequence forecasting scenarios, such as the evolution of social networks and preference prediction in
recommendation systems.
35. Another improvement suggested by me
◦ Instead of taking the entire graph, we can take only a selected part of it by drawing a 5 km drive-time distance, which
gives us a smaller graph and thus a smaller computation time.
◦ For example, for the city of Hamburg, Germany, I was given a shapefile containing traffic information for trucks and cars in
both directions on different streets. After the data was cleansed properly using Alteryx, I used the Haversine distance to
compute the distances from the given point (with latitude and longitude) to the midpoints of
the streets, thus creating a polygon buffer, i.e. a smaller shapefile.
◦ Completed the distance computation using the Haversine distance formula from:
(Latitude, Longitude) -> 53.543425, 10.021088
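The Haversine computation described above can be sketched as follows. The second coordinate pair (a hypothetical street midpoint) and the 5 km cutoff are illustrative; only the first coordinate pair comes from the slide.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# Distance from the reference point in Hamburg to a hypothetical
# nearby street midpoint; keep it if it falls inside the 5 km buffer.
d = haversine_km(53.543425, 10.021088, 53.5511, 10.0000)
keep = d <= 5.0
```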
36. My Results against the HERE API (NOT OPEN SOURCE)
You can see there is not much of a difference here.
37. How will Shell benefit?
◦ Shell is transitioning towards zero carbon emissions faster than most, if not all, oil, gas, and energy giants across the globe. Given
that, Shell has already started working at breakneck speed to promote and introduce electric vehicle charging stations across
the globe where we think they are most likely to be accepted quickly. We are introducing cheaper and faster charging networks
in different advanced markets.
◦ Currently we have already done so for China, Germany, the Netherlands, the UK, and the US, and I am leading the same work for
Singapore.
◦ One of the major variables we take into account is traffic flow, and a forward-looking prediction of it in particular will make
our work easier and more robust.
◦ We are interested not just in deploying these stations but also in maintaining them. However, the traffic flow data
we obtain from various paid and unpaid sources only covers the past few months and/or weeks, or is computed as
an average over a certain extent of time.
◦ Forward-looking prediction is still something that both the energy industry and academia are working on, and hence this is of very high
importance for us as Shell.
38. References:
◦ I have implemented the paper: Spatio-Temporal Graph Convolutional Networks: A Deep Learning
Framework for Traffic Forecasting, Bing Yu, Haoteng Yin, Zhanxing Zhu (https://arxiv.org/abs/1709.04875v3)
◦ Traffic Graph Convolutional Recurrent Neural Network: A Deep Learning Framework for Network-Scale
Traffic Learning and Forecasting, Zhiyong Cui et al. (IEEE)
◦ Language Modeling with Gated Convolutional Networks, Yann N. Dauphin et al.
(https://arxiv.org/pdf/1612.08083v3.pdf)
◦ Simple Spectral Graph Convolution, Hao Zhu and Piotr Koniusz (published as a conference paper at ICLR 2021)
◦ Traffic Flow Prediction Using Graph Convolution Neural Networks, Anton Agafonov (10th International
Conference on Information Science and Technology, Bath, United Kingdom, September 9-15, 2020)
39. References:
◦ Dynamic Spatial-Temporal Graph Convolutional Neural Networks for Traffic Forecasting, Diao et al.
(The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19),
https://ojs.aaai.org/index.php/AAAI/article/download/3877/3755)
◦ Attention Based Spatial-Temporal Graph Convolutional Networks for Traffic Flow Forecasting, Guo et al.
(The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19),
https://ojs.aaai.org/index.php/AAAI/article/download/3881/3759)
◦ Bike Flow Prediction with Multi-Graph Convolutional Networks, Chai et al., 2018
(https://arxiv.org/pdf/1807.10934)