The document provides an overview of artificial intelligence planning, including definitions and key concepts. It describes planning as a deliberative process that chooses and organizes actions to achieve goals. A conceptual model of planning problems uses state-transition systems and graphs to represent the problem. The document contrasts toy problems with real-world planning problems and discusses formulating planning as a search problem whose solution is a path from the initial state to a goal state.
This presentation covers heuristic search techniques, also known as informed search techniques.
It discusses only three such techniques; many more exist, but the most important ones are included here.
This document discusses various topics in artificial intelligence planning, including:
1. Planning involves automatically generating action strategies (plans) from descriptions of actions, sensors, and goals.
2. Key elements of planning include representation languages, mathematical models, and algorithms for solving planning problems effectively.
3. Classical planning assumes deterministic state models, while planning with uncertainty involves non-deterministic or probabilistic state transitions. Planning with sensing adds the ability to gather observations.
The document outlines the course contents for a theory course on machine learning. It covers 5 units: (1) introduction to machine learning concepts including regression, probability, statistics, linear algebra, convex optimization, and data preprocessing; (2) linear and nonlinear models including neural networks, loss functions, and regularization; (3) convolutional neural networks; (4) recurrent neural networks; and (5) support vector machines and applications of machine learning. It also lists recommended textbooks on pattern recognition, machine learning, and deep learning.
The document discusses various approaches to artificial intelligence planning, including state-space planning and plan-space planning. State-space planning formulates finding a plan as a search through the space of possible states, using either forward progression search or backward regression search. Plan-space planning formulates finding a plan as a search through the space of possible partial plans, starting with an initial partial plan and iteratively improving it. Specific planning techniques discussed include STRIPS planning, which represents states and actions using logical predicates, and progression and regression planners, which use forward and backward search respectively through the state space.
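The STRIPS-style progression (forward) search described above can be sketched in a few lines: states are sets of predicates, and each action carries preconditions, an add list, and a delete list. The block-stacking predicates and action names below are illustrative assumptions, not taken from the document.

```python
from collections import deque

def progression_search(init, goal, actions):
    """Breadth-first forward search over STRIPS-style states.
    States are frozensets of predicates; each action is a
    (name, preconditions, add_list, delete_list) tuple."""
    start = frozenset(init)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                      # all goal predicates hold
            return plan
        for name, pre, add, delete in actions:
            if pre <= state:                   # action is applicable
                nxt = (state - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

# Hypothetical one-block domain: pick up block A, then stack it on B.
acts = [
    ("pickup(A)", frozenset({"ontable(A)", "clear(A)"}),
     frozenset({"holding(A)"}), frozenset({"ontable(A)", "clear(A)"})),
    ("stack(A,B)", frozenset({"holding(A)", "clear(B)"}),
     frozenset({"on(A,B)"}), frozenset({"holding(A)", "clear(B)"})),
]
plan = progression_search({"ontable(A)", "clear(A)", "clear(B)"},
                          frozenset({"on(A,B)"}), acts)
```

A regression planner would run the same loop backward from the goal, regressing goal predicates through action add/delete lists instead of progressing states forward.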
This document provides an overview of an artificial intelligence course syllabus and key concepts in AI. It discusses topics like what AI is, the foundations and history of AI, production systems, state space search techniques including informed and uninformed searches. It also covers knowledge representation, reasoning, machine learning, computer vision, robotics and common AI problems. Key problems in AI like perception, natural language understanding, commonsense reasoning and more are explained.
Machine learning is presented by Pranay Rajput. The agenda includes an introduction to machine learning, basics, classification, regression, clustering, distance metrics, and use cases. ML allows computer programs to learn from experience to improve performance on tasks. Supervised learning predicts labels or targets while unsupervised learning finds hidden patterns in unlabeled data. Popular algorithms include classification, regression, and clustering. Classification predicts class labels, regression predicts continuous values, and clustering groups similar data points. Distance metrics like Euclidean, Manhattan, and cosine are used in ML models to measure similarity between data points. Common applications involve recommendation systems, computer vision, natural language processing, and fraud detection. Popular frameworks for ML include scikit-learn, TensorFlow, Keras.
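The three distance metrics named in the summary can be written out directly; this is a minimal sketch using only the standard library (the sample points are illustrative):

```python
import math

def euclidean(a, b):
    # straight-line distance between two points
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # sum of absolute coordinate differences ("city block" distance)
    return sum(abs(x - y) for x, y in zip(a, b))

def cosine_similarity(a, b):
    # cosine of the angle between two vectors (1 = same direction)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

p, q = (0, 0), (3, 4)
print(euclidean(p, q))                    # 5.0
print(manhattan(p, q))                    # 7
print(cosine_similarity((1, 0), (0, 1))) # 0.0 (orthogonal vectors)
```

Note that cosine measures similarity (higher is closer), whereas Euclidean and Manhattan measure distance (lower is closer), which matters when plugging them into a clustering or nearest-neighbor model.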
Prediction of route and destination intent, by Shibu Alampatta
This document discusses predicting a driver's intended route and destination using a Hidden Markov Model approach. It begins by outlining the problem and applications, such as route guidance and fuel efficiency. It then discusses using a probabilistic model to capture the sequential and routine nature of driving behavior. The model represents routes as a graph and states as links and goals. It is trained on past trip data to learn transition probabilities and predict the next link and destination. The approach achieves over 80% accuracy on average but has limitations for non-routine trips. Possible enhancements include additional parameters and expanding to other domains.
This document provides an overview of concepts in artificial intelligence robotics, including definitions of robots, tasks robots can perform, components of robots like effectors and sensors, and approaches to agent architectures, localization, mapping, planning, control and reactive control. Key points discussed include defining robots as programmable manipulators that perform tasks, the nondeterministic and dynamic nature of the real world environment, and methods like probabilistic roadmaps and potential fields for planning robot movements.
Dr. Subrat Panda gave an introduction to reinforcement learning. He defined reinforcement learning as dealing with agents that must sense and act upon their environment to receive delayed scalar feedback in the form of rewards. He described key concepts like the Markov decision process framework, value functions, Q-functions, exploration vs exploitation, and extensions like deep reinforcement learning. He listed several real-world applications of reinforcement learning and resources for learning more.
Dr. Subrat Panda gave an overview of reinforcement learning. He defined reinforcement learning as dealing with agents that must sense and act upon their environment to receive delayed scalar feedback in the form of rewards. The goal is to learn an optimal policy that maps states to actions to maximize total future discounted reward. Q-learning is introduced as a way to estimate state-action values directly without needing a model of the environment. Q-learning updates estimates based on new observations and prior estimates in a process called bootstrapping. Exploration must be balanced with exploitation of current knowledge. Real-world applications of deep reinforcement learning were discussed.
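The bootstrapped Q-learning update described above has a compact form: Q(s,a) ← Q(s,a) + α·(r + γ·max_a′ Q(s′,a′) − Q(s,a)). A minimal tabular sketch, with illustrative states, actions, and step sizes:

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9, actions=(0, 1)):
    """One Q-learning step: move Q(s,a) toward the bootstrapped
    target r + gamma * max over next-state action values."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)          # all estimates start at 0.0
q_update(Q, s=0, a=1, r=1.0, s_next=1)
# from zero estimates, Q[(0, 1)] moves halfway toward the reward: 0.5
```

Because the target uses the current estimate of the next state's value rather than a full return, the update can be applied after every single transition, which is the "bootstrapping" the summary refers to.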
The document discusses general problem solving in artificial intelligence. It defines key concepts like problem space, state space, operators, initial and goal states. Problem solving involves searching the state space to find a path from the initial to the goal state. Different search algorithms can be used, like depth-first search and breadth-first search. Heuristic functions can guide searches to improve efficiency. Constraint satisfaction problems are another class of problems that can be solved using techniques like backtracking.
This document summarizes various search algorithms and toy problems that are used to illustrate problem solving by searching. It begins by introducing problem solving as finding a sequence of actions to achieve a goal from an initial state. It then discusses uninformed search strategies like breadth-first, depth-first, and uniform cost search. Several toy problems are presented, including the 8-puzzle, vacuum world, missionaries and cannibals problem. Real-world problems involving route finding, VLSI layout, and robot navigation are also briefly described. Evaluation criteria for search algorithms like space/time complexity and optimality/completeness are covered. Finally, iterative deepening search is introduced as a way to overcome depth limitations.
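The uninformed strategies in the two summaries above can be made concrete with a tiny breadth-first search applied to the two-square vacuum world; the state encoding below (position plus a dirt flag per square) is an illustrative assumption:

```python
from collections import deque

def bfs(start, goal_test, successors):
    """Breadth-first search: complete, and optimal for unit step costs."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

# Two-square vacuum world: state = (position, dirt_left, dirt_right)
def successors(s):
    pos, dl, dr = s
    yield "Left",  (0, dl, dr)
    yield "Right", (1, dl, dr)
    yield "Suck",  (pos, False if pos == 0 else dl,
                         False if pos == 1 else dr)

# Start on the left square with both squares dirty; goal: all clean.
plan = bfs((0, True, True), lambda s: not (s[1] or s[2]), successors)
```

Swapping the `deque` for a stack gives depth-first search, and replacing it with a priority queue keyed on path cost gives uniform cost search, which is why these strategies are usually presented as variations of one frontier-driven loop.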
In this short presentation, we will provide some recent developments in the field of crowd monitoring, modelling and management. We will illustrate these by showing various projects that we are involved in, including the SmartStation project, and the different events organised in and around the city of Amsterdam (including the Europride, SAIL, etc.).
In the talk, we will discuss the different components of the system and the methods and technology involved. We focus on advanced data collection techniques, the use of social media data, data fusion, and the advanced macroscopic modelling required for this. We will also show examples of interventions that have been tested, demonstrating how these systems are used in practice.
Planning involves finding a sequence of actions that achieves a goal state from an initial state. It can be formulated as a state-space search problem. Progression planners search forward from the initial state, while regression planners search backward from the goal state. Partial-order planning delays commitments to action ordering, allowing more flexible plans. Planning graphs compactly represent interactions between actions and can provide better heuristics for search.
The document discusses planning and navigation for robots. It covers competencies for navigation including localization, mapping, and motion planning. It outlines topics like motion planning in state-space and configuration space, global motion planning algorithms, and local collision avoidance methods. It describes representations for maps like topological maps, grid maps, and kd-trees. It also summarizes various path planning algorithms including potential fields, graph searches, sampling-based approaches, and obstacle avoidance strategies.
This document provides an overview of machine learning concepts and techniques including linear regression, logistic regression, unsupervised learning, and k-means clustering. It discusses how machine learning involves using data to train models that can then be used to make predictions on new data. Key machine learning types covered are supervised learning (regression, classification), unsupervised learning (clustering), and reinforcement learning. Example machine learning applications are also mentioned such as spam filtering, recommender systems, and autonomous vehicles.
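As a concrete instance of the linear regression mentioned above, ordinary least squares for a single feature has a closed form; this sketch uses only the standard library and illustrative sample data:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a*x + b for one feature.
    Slope a = cov(x, y) / var(x); intercept b = mean(y) - a * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Points lying exactly on y = 2x + 1, so the fit recovers (2.0, 1.0)
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Logistic regression follows the same "model plus loss" pattern but passes the linear score through a sigmoid and minimizes log loss, which has no closed form and is fit iteratively instead.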
ODSC India 2018: Topological space creation & Clustering at BigData scale, by Kuldeep Jiwani
Kuldeep Jiwani's presentation discusses topological spaces and manifolds for modeling high-dimensional data geometries, and how this relates to clustering large datasets. It introduces topological spaces, metric spaces, manifolds, and their properties. It then discusses using global and local manifolds to model data geometries for clustering. The presentation also covers building distance matrices for large datasets in a distributed manner using optimizations like reducing shuffles, sparsity, and cut-offs. It concludes with using GraphX/GraphFrames to implement distributed DBSCAN clustering on the graph representation.
Calculus involves the study of limits, derivatives, and integrals to understand changes in quantities. It was developed by Newton and Leibniz and is divided into differential and integral calculus. Differential calculus examines rates of change, while integral calculus concerns quantities given rates of change. Calculus is applied in fields like science, technology, physics, and engineering to model real-world systems and problems.
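The two halves of calculus described above can each be approximated numerically in a few lines; the functions and step sizes here are illustrative:

```python
def derivative(f, x, h=1e-6):
    # central difference approximation of the rate of change f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def integral(f, a, b, n=100000):
    # midpoint Riemann sum approximation of the definite integral
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

print(derivative(lambda x: x**2, 3))   # approximately 6
print(integral(lambda x: x**2, 0, 1))  # approximately 1/3
```

The derivative answers "how fast is the quantity changing here?", while the integral recovers the total quantity from its rate of change, mirroring the differential/integral split in the summary.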
Calculus is a major part of mathematics. This theoretical presentation covers all relevant definitions and systematic review points about calculus, and serves as a bridge toward more advanced mathematics.
A Lightweight Infrastructure for Graph Analytics, by Donald Nguyen
Several domain-specific languages (DSLs) for parallel graph analytics have been proposed recently. In this paper, we argue that existing DSLs can be implemented on top of a general-purpose infrastructure that (i) supports very fine-grain tasks, (ii) implements autonomous, speculative execution of these tasks, and (iii) allows application-specific control of task scheduling policies. To support this claim, we describe such an implementation called the Galois system.
We demonstrate the capabilities of this infrastructure in three ways. First, we implement more sophisticated algorithms for some of the graph analytics problems tackled by previous DSLs and show that end-to-end performance can be improved by orders of magnitude even on power-law graphs, thanks to the better algorithms facilitated by a more general programming model. Second, we show that, even when an algorithm can be expressed in existing DSLs, the implementation of that algorithm in the more general system can be orders of magnitude faster when the input graphs are road networks and similar graphs with high diameter, thanks to more sophisticated scheduling. Third, we implement the APIs of three existing graph DSLs on top of the common infrastructure in a few hundred lines of code and show that even for power-law graphs, the performance of the resulting implementations often exceeds that of the original DSL systems, thanks to the lightweight infrastructure.
The document discusses problem solving agents and search algorithms. It defines a search problem as having an initial state, possible actions or operators that change states, a goal test to determine if a state is the goal state, and a path cost function. Search algorithms take a problem as input and systematically examine states by considering sequences of actions to find a lowest cost path from the start to a goal state. Different search techniques include methods that find any solution path, the lowest cost path, or methods that consider an opponent. Examples of search problems discussed are finding parking, the vacuum world problem, and the eight puzzle problem.
STUDY ON PROJECT MANAGEMENT THROUGH GENETIC ALGORITHM, by Avay Minni
This document describes using a genetic algorithm to solve resource constrained project scheduling problems. It begins with an introduction explaining that planning and scheduling projects involves managing many possible solutions and resource allocations. It then provides sections on genetic algorithms, the basic genetic algorithm process, and why genetic algorithms are suitable for this type of optimization problem. The document outlines the general formulation of resource constrained project scheduling as a linear programming problem and provides an example problem scenario. It includes flowcharts and discusses implementing the proposed genetic algorithm solution methodology.
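The basic genetic algorithm process referenced above (selection, crossover, mutation over generations) can be sketched on a toy bit-string problem; this is an illustrative one-max example, not the paper's resource-constrained scheduling formulation:

```python
import random

def genetic_algorithm(fitness, length, pop_size=30, generations=60,
                      mutation_rate=0.05, seed=0):
    """Minimal bit-string GA: tournament selection, one-point
    crossover, and bit-flip mutation. Illustrative only."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament of two: keep the fitter individual
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < mutation_rate else g
                     for g in child]                # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy objective ("one-max"): maximize the number of 1-bits
best = genetic_algorithm(sum, length=12)
```

For project scheduling, the chromosome would instead encode an activity ordering or start times, and the fitness function would penalize resource-constraint violations and long makespans.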
Reinforcement learning is a computational approach for learning through interaction without an explicit teacher. An agent takes actions in various states and receives rewards, allowing it to learn relationships between situations and optimal actions. The goal is to learn a policy that maximizes long-term rewards by balancing exploitation of current knowledge with exploration of new actions. Methods like Q-learning use value function approximation and experience replay in deep neural networks to scale to complex problems with large state spaces like video games. Temporal difference learning combines the advantages of Monte Carlo and dynamic programming by bootstrapping values from current estimates rather than waiting for full episodes.
We are pleased to share with you the latest VCOSA statistical report on the cotton and yarn industry for the month of March 2024.
Starting from January 2024, the full weekly and monthly reports will only be available for free to VCOSA members. To access the complete weekly report with figures, charts, and detailed analysis of the cotton fiber market in the past week, interested parties are kindly requested to contact VCOSA to subscribe to the newsletter.
06-20-2024 AI Camp Meetup: Unstructured Data and Vector Databases, by Timothy Spann
Tech Talk: Unstructured Data and Vector Databases
Speaker: Tim Spann (Zilliz)
Abstract: In this session, I will discuss unstructured data and the world of vector databases, and we will see how they differ from traditional databases: in which cases you need one, and in which you probably don't. I will also go over similarity search, where vectors come from, and an example of a vector database architecture, wrapping up with an overview of Milvus.
Introduction
Unstructured data, vector databases, traditional databases, similarity search
Vectors
Where, What, How, Why Vectors? We’ll cover a Vector Database Architecture
Introducing Milvus
What drives Milvus' emergence as the most widely adopted vector database
Hi Unstructured Data Friends!
I hope this video had all the unstructured data processing, AI, and vector database demos you needed for now. If not, there's a ton more linked below.
My source code is available here
https://github.com/tspannhw/
Let me know in the comments if you liked what you saw, how I can improve, and what I should show next. Thanks, hope to see you soon at a Meetup in Princeton, Philadelphia, New York City, or here in the YouTube Matrix.
Get Milvused!
https://milvus.io/
Read my Newsletter every week!
https://github.com/tspannhw/FLiPStackWeekly/blob/main/141-10June2024.md
For more cool Unstructured Data, AI and Vector Database videos check out the Milvus vector database videos here
https://www.youtube.com/@MilvusVectorDatabase/videos
Unstructured Data Meetups -
https://www.meetup.com/unstructured-data-meetup-new-york/
https://lu.ma/calendar/manage/cal-VNT79trvj0jS8S7
https://www.meetup.com/pro/unstructureddata/
https://zilliz.com/community/unstructured-data-meetup
https://zilliz.com/event
Twitter/X: https://x.com/milvusio https://x.com/paasdev
LinkedIn: https://www.linkedin.com/company/zilliz/ https://www.linkedin.com/in/timothyspann/
GitHub: https://github.com/milvus-io/milvus https://github.com/tspannhw
Invitation to join Discord: https://discord.com/invite/FjCMmaJng6
Blogs: https://milvusio.medium.com/ https://www.opensourcevectordb.cloud/ https://medium.com/@tspann
https://www.meetup.com/unstructured-data-meetup-new-york/events/301383476/?slug=unstructured-data-meetup-new-york&eventId=301383476
https://www.aicamp.ai/event/eventdetails/W2024062014
Open Source Contributions to Postgres: The Basics POSETTE 2024ElizabethGarrettChri
Postgres is the most advanced open-source database in the world and it's supported by a community, not a single company. So how does this work? How does code actually get into Postgres? I recently had a patch submitted and committed and I want to share what I learned in that process. I’ll give you an overview of Postgres versions and how the underlying project codebase functions. I’ll also show you the process for submitting a patch and getting that tested and committed.
Build applications with generative AI on Google CloudMárton Kodok
We will explore Vertex AI - Model Garden powered experiences, we are going to learn more about the integration of these generative AI APIs. We are going to see in action what the Gemini family of generative models are for developers to build and deploy AI-driven applications. Vertex AI includes a suite of foundation models, these are referred to as the PaLM and Gemini family of generative ai models, and they come in different versions. We are going to cover how to use via API to: - execute prompts in text and chat - cover multimodal use cases with image prompts. - finetune and distill to improve knowledge domains - run function calls with foundation models to optimize them for specific tasks. At the end of the session, developers will understand how to innovate with generative AI and develop apps using the generative ai industry trends.
Discover the cutting-edge telemetry solution implemented for Alan Wake 2 by Remedy Entertainment in collaboration with AWS. This comprehensive presentation dives into our objectives, detailing how we utilized advanced analytics to drive gameplay improvements and player engagement.
Key highlights include:
Primary Goals: Implementing gameplay and technical telemetry to capture detailed player behavior and game performance data, fostering data-driven decision-making.
Tech Stack: Leveraging AWS services such as EKS for hosting, WAF for security, Karpenter for instance optimization, S3 for data storage, and OpenTelemetry Collector for data collection. EventBridge and Lambda were used for data compression, while Glue ETL and Athena facilitated data transformation and preparation.
Data Utilization: Transforming raw data into actionable insights with technologies like Glue ETL (PySpark scripts), Glue Crawler, and Athena, culminating in detailed visualizations with Tableau.
Achievements: Successfully managing 700 million to 1 billion events per month at a cost-effective rate, with significant savings compared to commercial solutions. This approach has enabled simplified scaling and substantial improvements in game design, reducing player churn through targeted adjustments.
Community Engagement: Enhanced ability to engage with player communities by leveraging precise data insights, despite having a small community management team.
This presentation is an invaluable resource for professionals in game development, data analytics, and cloud computing, offering insights into how telemetry and analytics can revolutionize player experience and game performance optimization.
We are pleased to share with you the latest VCOSA statistical report on the cotton and yarn industry for the month of May 2024.
Starting from January 2024, the full weekly and monthly reports will only be available for free to VCOSA members. To access the complete weekly report with figures, charts, and detailed analysis of the cotton fiber market in the past week, interested parties are kindly requested to contact VCOSA to subscribe to the newsletter.
2. 2
Overview
• What is Planning (in AI)?
• A Conceptual Model for Planning
• Planning and Search
• Example Problems
Overview
What is Planning (in AI)?
•now: what do we mean by (AI) planning?
•A Conceptual Model for Planning
•Planning and Search
•Example Problems
3. 3
Human Planning and Acting
• acting without (explicit) planning:
– when purpose is immediate
– when performing well-trained behaviours
– when course of action can be freely adapted
• acting after planning:
– when addressing a new situation
– when tasks are complex
– when the environment imposes high risk/cost
– when collaborating with others
• people plan only when strictly necessary
Human Planning and Acting
•humans rarely plan before acting in everyday situations
•acting without (explicit) planning: (may be subconscious)
•when purpose is immediate (e.g. switch on computer)
•when performing well-trained behaviours (e.g. drive car)
•when course of action can be freely adapted (e.g. shopping)
•acting after planning:
•when addressing a new situation (e.g. move house)
•when tasks are complex (e.g. plan this course)
•when the environment imposes high risk/cost (e.g. manage nuclear power
station)
•when collaborating with others (e.g. build house)
•people plan only when strictly necessary
•because planning is complicated and time-consuming (trade-off: cost vs.
benefit)
•often we seek only good rather than optimal plans
4. 4
Defining AI Planning
• planning:
– explicit deliberation process that chooses and organizes
actions by anticipating their outcomes
– aims at achieving some pre-stated objectives
• AI planning:
– computational study of this deliberation process
Defining AI Planning
•planning:
•explicit deliberation process that chooses and organizes actions by
anticipating their outcomes
•in short: planning is reasoning about actions
•aims at achieving some pre-stated objectives
•or: achieving them as best as possible (planning as optimization
problem)
•AI planning:
•computational study of this deliberation process
5. 5
Why Study Planning in AI?
• scientific goal of AI:
understand intelligence
– planning is an important component of rational
(intelligent) behaviour
• engineering goal of AI:
build intelligent entities
– build planning software for choosing and
organizing actions for autonomous intelligent
machines
Why Study Planning in AI?
•scientific goal of AI: understand intelligence
•planning is an important component of rational (intelligent) behaviour
•planning is part of intelligent behaviour
•engineering goal of AI: build intelligent entities
•build planning software for choosing and organizing actions for
autonomous intelligent machines
•example: Mars explorer (cannot be remotely operated)
•robot: Shakey, SRI 1968
6. 6
Domain-Specific vs.
Domain-Independent Planning
• domain-specific planning: use specific representations and
techniques adapted to each problem
– important domains: path and motion planning, perception planning,
manipulation planning, communication planning
• domain-independent planning: use generic representations and
techniques
– exploit commonalities to all forms of planning
– leads to general understanding of planning
• domain-independent planning complements domain-specific
planning
Domain-Specific vs. Domain-Independent Planning
•domain-specific planning: use specific representations and techniques
adapted to each problem
•important domains: path and motion planning, perception planning,
manipulation planning, communication planning
•domain-independent planning: use generic representations and techniques
•exploit commonalities to all forms of planning
•saves effort; no need to reinvent same techniques for different
problems
•leads to general understanding of planning
•contributes to scientific goal of AI
•domain-independent planning complements domain-specific planning
•use domain-dependent planning where highly efficient solution is required
7. Quiz: True or False?
• people only plan when they have to because the benefit of an optimal
plan does not always justify the effort of planning
• for humans, planning is a subconscious process, which is why
computational planning is so hard
• planning involves a mental simulation of actions to foresee future world
states and compare them to goals
• in Artificial Intelligence, planning is concerned with the search for
computationally optimal plans
• domain-specific planning is used when efficiency is vital, whereas
domain-independent planning is good for planning from first principles
7
8. 8
Overview
• What is Planning (in AI)?
• A Conceptual Model for Planning
• Planning and Search
• Example Problems
Overview
•What is Planning (in AI)?
•just done: what do we mean by (AI) planning?
A Conceptual Model for Planning
•now: state-transition systems – formalizing the problem
•Planning and Search
•Example Problems
9. 9
Why a Conceptual Model?
• conceptual model: theoretical device for describing the
elements of a problem
• good for:
– explaining basic concepts
– clarifying assumptions
– analyzing requirements
– proving semantic properties
• not good for:
– efficient algorithms and computational concerns
Why a Conceptual Model?
•conceptual model: theoretical device for describing the elements of a problem
•good for:
•explaining basic concepts: what are the objects to be manipulated during
problem-solving?
•clarifying assumptions: what are the constraints imposed by this model?
•analyzing requirements: what representations do we need for the objects?
•proving semantic properties: when is an algorithm sound or complete?
•not good for:
•efficient algorithms and computational concerns
10. 10
Conceptual Model for Planning:
State-Transition Systems
• A state-transition system is a 4-tuple Σ = (S,A,E,γ), where:
– S = {s1,s2,…} is a finite or recursively enumerable set of states;
– A = {a1,a2,…} is a finite or recursively enumerable set of actions;
– E = {e1,e2,…} is a finite or recursively enumerable set of events;
and
– γ: S×(A∪E)→2^S is a state transition function.
• if a∈A and γ(s,a) ≠ ∅ then a is applicable in s
• applying a in s will take the system to s′∈γ(s,a)
Conceptual Model for Planning: State-Transition Systems
•A state-transition system is a 4-tuple Σ=(S,A,E,γ), where:
•a general model for a dynamic system, common to other areas of computer
science; aka. discrete-event system
•S = {s1,s2,…} is a finite or recursively enumerable set of states;
•the possible states the world can be in
•A = {a1,a2,…} is a finite or recursively enumerable set of actions;
•the actions that can be performed by some agent in the world,
transitions are controlled by the plan executor
•E = {e1,e2,…} is a finite or recursively enumerable set of events; and
•the events that can occur in the world, transitions that are contingent
(correspond to the internal dynamics of the system)
•γ: S×(A∪E)→2^S is a state transition function.
•notation: 2^S = powerset of S; maps to a set of states
•the function describing how the world evolves when actions or events
occur
•note: model does not allow for parallelism between actions and/or events
•if a∈A and γ(s,a) ≠ ∅ then a is applicable in s
•applying a in s will take the system to s′∈γ(s,a)
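The definition above can be made concrete with a minimal sketch in Python, encoding γ as a dictionary from (state, action-or-event) pairs to sets of successor states. The two-state "toggle" system is my own toy example, not from the slides.

```python
# A minimal sketch of a state-transition system Σ = (S, A, E, γ).
# γ is a dict mapping (state, action-or-event) pairs to sets of states.

def make_gamma():
    # tiny two-state system: action "toggle" flips the state
    return {
        ("s0", "toggle"): {"s1"},
        ("s1", "toggle"): {"s0"},
    }

def applicable(gamma, state, action):
    # a is applicable in s iff γ(s, a) ≠ ∅
    return bool(gamma.get((state, action), set()))

def apply_action(gamma, state, action):
    # applying a in s takes the system to some s' ∈ γ(s, a)
    return gamma.get((state, action), set())
```

Note that γ returns a *set* of states: in the general model an action may be non-deterministic; the deterministic case is the singleton set.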
11. 11
State-Transition Systems as Graphs
• A state-transition system Σ = (S,A,E,γ) can be
represented by a directed labelled graph G = (NG,EG)
where:
– the nodes correspond to the states in S, i.e. NG=S; and
– there is an arc from s∈NG to s′∈NG, i.e. s→s′∈EG, with label
u∈(A∪E) if and only if s′∈γ(s,u).
State-Transition Systems as Graphs
•A state-transition system Σ=(S,A,E,γ) can be represented by a directed labelled
graph G=(NG,EG) where:
•the nodes correspond to the states in S, i.e. NG=S; and
•nodes correspond to world states
•there is an arc from s∈NG to s′∈NG, i.e. s→s′∈EG, with label u∈(A∪E) if and
only if s′∈γ(s,u).
•there is an arc if there is an action or event that transforms one state
into the other (called a state transition)
•the label of that arc is that action or event
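The graph view follows mechanically from γ; a sketch, using a toy "door" system of my own rather than anything from the slides:

```python
# Sketch: turn a state-transition system's γ (dict of (state, label) ->
# set of successor states) into a directed labelled graph as an arc list.
# There is an arc s --u--> s' iff s' ∈ γ(s, u).

def to_arcs(gamma):
    return sorted((s, u, s2) for (s, u), succs in gamma.items() for s2 in succs)

# toy γ: a door that can be opened and closed
GAMMA = {
    ("closed", "open"): {"open"},
    ("open", "close"): {"closed"},
}
```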
12. Toy Problem:
Missionaries and Cannibals
On one bank of a river are three
missionaries (black triangles) and
three cannibals (red circles). There is
one boat available that can hold up
to two people and that they would like to use to
cross the river. If the cannibals ever outnumber the missionaries on
either of the river’s banks, the missionaries will get eaten.
How can the boat be used to safely carry all the missionaries and
cannibals across the river?
Toy Problem: Missionaries and Cannibals
•On one bank of a river are three missionaries (black triangles) and three
cannibals (red circles).
•description of initial state and graphical representation; other states must
follow the same pattern
•There is one boat available that can hold up to two people and that they would
like to use to cross the river.
•more state description: boat; implicitly: description of possible action: cross
river using boat
•If the cannibals ever outnumber the missionaries on either of the river’s
banks, the missionaries will get eaten.
•constraint on all states; must not be violated
•How can the boat be used to safely carry all the missionaries and cannibals
across the river?
•problem: find a sequence of actions that brings about another state
13. 13
State-Transition Graph Example:
Missionaries and Cannibals
[Figure: state-transition graph of the puzzle; arcs are labelled with the crossing made, e.g. 1c, 1m, 2c, 2m, 1m1c.]
State-Transition Graph Example: Missionaries and Cannibals
•On one bank of a river are three missionaries (black triangles) and three cannibals
(red circles). There is one boat available that can hold up to two people and that they
would like to use to cross the river. If the cannibals ever outnumber the missionaries
on either of the river’s banks, the missionaries will get eaten. How can the boat be
used to safely carry all the missionaries and cannibals across the river?
14. 14
Objectives and Plans
• state-transition system:
– describes all ways in which a system may evolve
• plan:
– a structure that gives appropriate actions to apply in order to achieve
some objective when starting from a given state
• types of objective:
– goal state sg or set of goal states Sg
– satisfy some conditions over the sequence of states
– optimize utility function attached to states
– task to be performed
Objectives and Plans
•state-transition system:
•describes all ways in which a system may evolve
•plan:
•a structure that gives appropriate actions to apply in order to achieve
some objective when starting from a given state
•structure: e.g. sequential list of actions to be performed in order;
function mapping states to actions
•describes a path through the state-transition graph
•types of objective:
•goal state sg or set of goal states Sg
•simplest case; objective achieved by sequence of transitions that ends
in one of the goal states
•satisfy some conditions over the sequence of states
•example: states to be avoided or visited (goal state is special case)
•optimize utility function attached to states
•optimize compound function (sum, max) of utilities of visited states
•task to be performed
•alternative approach: recursively defined set of actions and (other)
tasks
15. 15
Planning and Plan Execution
• planner:
– given: description of Σ, initial state,
objective
– generate: plan that achieves objective
• controller:
– given: plan, current state (observation
function: η:S→O)
– generate: action
• state-transition system:
– evolves as actions are executed and
events occur
[Diagram: the Planner receives the Description of Σ, the Initial State, and the Objectives, and passes a Plan to the Controller; the Controller exchanges Actions and Observations with System Σ, which is also driven by Events.]
Planning and Plan Execution
•planner:
•given: description of Σ, initial state, objective
•generate: plan that achieves objective
•planner works offline: relies on formal description of Σ
•controller:
•given: plan, current state (observation function: η:S→O)
•partial knowledge of controller about world modelled through
observation function
•O = set of possible observations (e.g. subset); input for controller
•generate: action
•controller works online: along with the dynamics of Σ
•state-transition system:
•evolves as actions are executed and events occur
16. Dynamic Planning
• problem: real world differs from model
described by Σ
• more realistic model: interleaved planning
and execution
– plan supervision
– plan revision
– re-planning
• dynamic planning: closed loop between
planner and controller
– execution status
[Diagram: the Planner (given the Description of Σ, the Initial State, and the Objectives) sends Plans to the Controller; the Controller exchanges Actions and Observations with System Σ, which is also driven by Events; a feedback arrow carries the Execution Status from the Controller back to the Planner.]
Dynamic Planning
•problem: physical system differs from model described by Σ
•planner only has access to model (description of Σ)
•controller must cope with differences between Σ and real world
•more realistic model: interleaved planning and execution
•plan supervision: detect when observations differ from expected results
•plan revision: adapt existing plan to new circumstances
•re-planning: generate a new plan from current (initial) state
•dynamic planning: closed loop between planner and controller
•execution status
17. 17
Overview
• What is Planning (in AI)?
• A Conceptual Model for Planning
• Planning and Search
• Example Problems
Overview
•What is Planning (in AI)?
•A Conceptual Model for Planning
•just done: state-transition systems – formalizing the problem
Planning and Search
•now: problem formulation and search problems
•Example Problems
18. Toy Problems vs.
Real-World Problems
Toy Problems/Puzzles
– concise and exact description
– used for illustration purposes
(e.g. here)
– used for performance
comparisons
Real-World Problems
– no single, agreed-upon
description
– people care about the
solutions
Toy Problems vs. Real-World Problems
•Toy Problems/Puzzles
•concise and exact description
•used for illustration purposes (e.g. here)
•used for performance comparisons
•toy problems combine
•richness: they are anything but trivial to solve
•simplicity: they are described easily and precisely
•Real-World Problems
•no single, agreed-upon description
•people care about the solutions
•most real-world problems would take hours to define
19. Search Problems
• initial state
• set of possible actions/applicability conditions
– successor function: state → set of <action, state>
– successor function + initial state = state space
– path (solution)
• goal
– goal state or goal test function
• path cost function
– for optimality
– assumption: path cost = sum of step costs
Search Problems
•initial state: current state the world is in (state = situation)
•states: symbol structures representing real-world objects and relations
(cf. physical symbol systems)
•finite set of possible actions (aka. operators or production rules (problem
formulation))/applicability conditions
•successor function: state → set of <action, state>: action is applicable in
given state; result of applying action in given state is paired state
•successor function + initial state = state space: directed graph with states
as nodes and actions as arcs
•path (in the graph) (solution)
•goal (goal formulation)
•goal state (for unique goal state) or goal test function (for multiple goal
states (e.g. in chess))
•Solution: path in state space from initial state to goal state
•path cost function
•for optimality: find solution path with minimal path cost
•assumption: path cost = sum of step costs (cost of applying a given action
in a given state)
20. Missionaries and Cannibals: Successor
Function
state → set of <action, state>
(L:3m,3c,b-R:0m,0c) → {<2c, (L:3m,1c-R:0m,2c,b)>,
<1m1c, (L:2m,2c-R:1m,1c,b)>,
<1c, (L:3m,2c-R:0m,1c,b)>}
(L:3m,1c-R:0m,2c,b) → {<2c, (L:3m,3c,b-R:0m,0c)>,
<1c, (L:3m,2c,b-R:0m,1c)>}
(L:2m,2c-R:1m,1c,b) → {<1m1c, (L:3m,3c,b-R:0m,0c)>,
<1m, (L:3m,2c,b-R:0m,1c)>}
Missionaries and Cannibals: Successor Function
•state → set of <action, state> (domain and range: set of pairs)
•(L:3m,3c,b-R:0m,0c) → {<2c, (L:3m,1c-R:0m,2c,b)>, <1m1c, (L:2m,2c-
R:1m,1c,b)>, <1c, (L:3m,2c-R:0m,1c,b)>}
•states:
•L/R: on left/right bank
•m/c: missionaries/cannibals (example: 3 missionaries and three
cannibals on left bank, none on right bank)
•b: boat (example: boat on left bank)
•actions:
•m/c: missionaries/cannibals crossing (example(s): 2 cannibals crossing
(L to R), 1m and 1c crossing; 1c crossing)
•(L:3m,1c-R:0m,2c,b) → {<2c, (L:3m,3c,b-R:0m,0c)>, <1c, (L:3m,2c,b-R:0m,1c)>}
(note: only two actions applicable)
•(L:2m,2c-R:1m,1c,b) → {<1m1c, (L:3m,3c,b-R:0m,0c)>, <1m, (L:3m,2c,b-
R:0m,1c)>}
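The successor function above can be sketched in Python. The state encoding here is my own, (missionaries on the left bank, cannibals on the left bank, boat bank), rather than the L:…/R:… notation of the slide.

```python
# Sketch of the Missionaries and Cannibals successor function.
# State: (missionaries_left, cannibals_left, boat_bank); 3 of each in total.

MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # (m, c) in the boat

def safe(m, c):
    # missionaries are safe on a bank if absent or not outnumbered
    return m == 0 or m >= c

def successors(state):
    mL, cL, boat = state
    result = []
    for dm, dc in MOVES:
        if boat == "L":
            m2, c2, b2 = mL - dm, cL - dc, "R"
        else:
            m2, c2, b2 = mL + dm, cL + dc, "L"
        # the move is applicable if counts stay in range and both banks are safe
        if 0 <= m2 <= 3 and 0 <= c2 <= 3 and safe(m2, c2) and safe(3 - m2, 3 - c2):
            result.append(((dm, dc), (m2, c2, b2)))
    return result
```

From the initial state this yields exactly the three successors listed on the slide: 2c, 1m1c, and 1c crossing.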
21. Real-World Problem:
Touring in Romania
[Figure: road map of Romania with cities (Arad, Zerind, Oradea, Sibiu, Timisoara, Lugoj, Mehadia, Dobreta, Craiova, Rimnicu Vilcea, Fagaras, Pitesti, Bucharest, Giurgiu, Urziceni, Hirsova, Eforie, Vaslui, Iasi, Neamt) and roads labelled with their distances.]
Real-World Problem: Touring in Romania
•shown: rough map of Romania
•initial state: on vacation in Arad, Romania
•goal? actions? -- “Touring Romania” cannot readily be described in terms of possible
actions, goals, and path cost
22. Problem Formulation
• problem formulation
– process of deciding what actions and states to consider
– granularity/abstraction level
• assumptions about the
environment:
– finite and discrete
– fully observable
– deterministic
– static
• other assumptions:
– restricted goals
– sequential plans
– implicit time
– offline planning
Problem Formulation
•problem formulation
•process of deciding what actions and states to consider
•granularity/abstraction level
•TR example:
•possible actions: turn steering wheel by one degree left/right; move left
foot … too much (irrelevant) detail, uncertainty, too many steps in
solution
•possible action: drive to Bucharest; but how to execute such an action?
•best level of granularity: drive to another major town to which there is a
direct road
•assumptions about the environment:
•finite and discrete: we can enumerate all the possible actions and states
(else: continuous)
•fully observable: agent can see what current state it is in
•deterministic: applying an action in a given state leads to exactly one
(usually different) state
•static: environment does not change while we think (no events)
•other assumptions:
•restricted goals: given as an explicit goal state sg or set of goal states Sg
•sequential plans: solution plan is a linearly ordered finite sequence of
actions
•implicit time: actions and events have no duration
•offline planning: planner is not concerned with changes of Σ while planning
23. Search Nodes
• search nodes: the nodes in the search tree
• data structure:
– state: a state in the state space
– parent node: the immediate predecessor in the search tree
– action: the action that, performed in the parent node’s state, leads to
this node’s state
– path cost: the total cost of the path leading to this node
– depth: the depth of this node in the search tree
Search Nodes
•search nodes: the nodes in the search tree
•node is a bookkeeping structure in a search tree
•data structure:
•state: a state in the state space
•state (vs. node) corresponds to a configuration of the world
•two nodes may contain equal states
•parent node: the immediate predecessor in the search tree
•nodes are on paths (defined by parent nodes)
•action: the action that, performed in the parent node’s state, leads to
this node’s state
•path cost: the total cost of the path leading to this node
•depth: the depth of this node in the search tree
•alternative: representing paths only (sequences of actions):
•possible, but state provides direct access to valuable information that might
be expensive to regenerate all the time
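The node data structure described above can be sketched as a small Python class; the field names follow the slide, while the class itself is illustrative.

```python
# Sketch of the search-node bookkeeping structure.

class SearchNode:
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state        # a state in the state space
        self.parent = parent      # immediate predecessor in the search tree
        self.action = action      # action leading from parent's state here
        self.path_cost = (parent.path_cost if parent else 0) + step_cost
        self.depth = (parent.depth + 1) if parent else 0

def path_to(node):
    # recover the solution path by following parent links back to the root
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```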
24. General Tree Search Algorithm
function treeSearch(problem, strategy)
fringe ← { new searchNode(problem.initialState) }
loop
if empty(fringe) then return failure
node ← selectFrom(fringe, strategy)
if problem.goalTest(node.state) then
return pathTo(node)
fringe ← fringe + expand(problem, node)
General Tree Search Algorithm
function treeSearch(problem, strategy)
•find a solution to the given problem while expanding nodes according to the given
strategy
fringe ← { new searchNode(problem.initialState) }
•fringe: set of known states; initially just initial state
loop
•possibly infinite loop expands nodes
if empty(fringe) then return failure
•complete tree explored; no goal state found
node ← selectFrom(fringe, strategy)
•select node from fringe according to search control strategy; the node will not
be selected again
if problem.goalTest(node.state) then
•goal test before expansion: to avoid trick problems like “get from Arad to Arad”
return pathTo(node)
•success: goal node found
fringe ← fringe + expand(problem, node)
•otherwise: add new nodes to the fringe and continue loop
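The pseudocode above can be rendered as runnable Python; the list-based fringe, the `select` parameter standing in for the strategy, and the tiny chain problem in the test are my own illustration.

```python
# A runnable rendering of treeSearch. `select` removes and returns the
# next node from the fringe, playing the role of the search strategy.

def tree_search(initial_state, goal_test, successors, select):
    fringe = [{"state": initial_state, "path": []}]
    while fringe:
        node = select(fringe)            # strategy picks the next node
        if goal_test(node["state"]):
            return node["path"]          # success: goal node found
        for action, state in successors(node["state"]):
            fringe.append({"state": state, "path": node["path"] + [action]})
    return None                          # complete tree explored, no goal
```

With `select=lambda f: f.pop(0)` this expands nodes FIFO; `select=lambda f: f.pop()` gives LIFO.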
25. Search (Control) Strategy
• search or control strategy: an effective method for
scheduling the application of the successor function to
expand nodes
– selects the next node to be expanded from the fringe
– determines the order in which nodes are expanded
– aim: produce a goal state as quickly as possible
• examples:
– LIFO/FIFO-queue for fringe nodes
– alphabetical ordering
Search (Control) Strategy
•search or control strategy: an effective method for scheduling the application
of the successor function to expand nodes
•removes non-determinism from search method
•selects the next node to be expanded from the fringe
•closed nodes never need to be expanded again
•determines the order in which nodes are expanded
•exact order makes method deterministic
•aim: produce a goal state as quickly as possible
•strategy that produces goal state quicker is usually considered better
•examples:
•LIFO/FIFO-queue for fringe nodes (two fundamental search strategies)
•alphabetical ordering
•remark: complete search tree is usually too large to fit into memory, strategy
determines which part to generate
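The effect of the two fundamental strategies can be seen on a tiny hand-made tree (my own example): with a plain Python list as fringe, `pop(0)` gives FIFO (breadth-first) and `pop()` gives LIFO (depth-first) expansion order.

```python
# Compare FIFO and LIFO fringe selection on a small fixed tree.

TREE = {"A": ["B", "C"], "B": ["D", "E"], "C": [], "D": [], "E": []}

def expansion_order(select):
    fringe, order = ["A"], []
    while fringe:
        node = select(fringe)      # strategy removes the next node
        order.append(node)
        fringe.extend(TREE[node])  # expand: add children to the fringe
    return order
```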
26. 26
Overview
• What is Planning (in AI)?
• A Conceptual Model for Planning
• Planning and Search
• Example Problems
Overview
•What is Planning (in AI)?
•A Conceptual Model for Planning
•Planning and Search
•just done: problem formulation and search problems
Example Problems
•now: more examples incl. DWR problem
27. Toy Problem:
Sliding-Block Puzzle
The 8-Puzzle
[Figure: an example start configuration (left) and the goal configuration (right) of the 8-puzzle, tiles 1–8 plus the blank on a 3×3 board.]
Toy Problem: Sliding-Block Puzzle
•[images]
•The 8-Puzzle
•problem description:
•states: location of each tile and the blank (9!/2 = 181440 states)
•initial state: any (reachable) state, e.g. one shown on left (worst case: solution
requires at least 31 steps)
•actions: good formulation: move blank left/right/up/down
•goal test: goal state (shown right)
•step cost: 1
•generalization: sliding-tile puzzles, e.g. 15-puzzle, 24-puzzle
•problem class is NP-complete
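The "move the blank" formulation suggested above can be sketched as follows; the tuple encoding of the board is my own choice.

```python
# Sketch of the 8-puzzle successor function under the "move the blank"
# formulation. State: tuple of 9 entries, 0 for the blank, read row by row.

def blank_moves(state):
    i = state.index(0)                 # position of the blank
    r, c = divmod(i, 3)
    result = []
    for name, (dr, dc) in [("up", (-1, 0)), ("down", (1, 0)),
                           ("left", (0, -1)), ("right", (0, 1))]:
        r2, c2 = r + dr, c + dc
        if 0 <= r2 < 3 and 0 <= c2 < 3:
            j = 3 * r2 + c2
            s = list(state)
            s[i], s[j] = s[j], s[i]    # slide the adjacent tile into the blank
            result.append((name, tuple(s)))
    return result
```

This is why the formulation is called good: every state has at most four actions, instead of one action per tile.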
28. Toy Problem: N-Queens Problem
Place n queens on an
n by n chess board
such that none of the
queens attacks any of
the others.
Toy Problem: N-Queens Problem
•Place n queens on an n by n chess board such that none of the queens
attacks any of the others.
•problem: no row, column, or diagonal must contain more than one queen
•path cost irrelevant; looking for goal state
•configuration shown is not a goal state
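Since only the goal state matters here, the interesting piece is the goal test; a sketch, using the common encoding (my own choice) where entry i of a list is the column of the queen in row i, so rows are distinct by construction.

```python
# N-Queens goal test: no two queens share a column or a diagonal.
# cols[i] = column of the queen placed in row i.

def is_goal(cols):
    n = len(cols)
    for i in range(n):
        for j in range(i + 1, n):
            if cols[i] == cols[j]:               # same column
                return False
            if abs(cols[i] - cols[j]) == j - i:  # same diagonal
                return False
    return True
```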
29. 29
The Dock-Worker Robots (DWR)
Domain
• aim: have one example to illustrate
planning procedures and techniques
• informal description:
– harbour with several locations (docks),
docked ships, storage areas for
containers, and parking areas for trucks
and trains
– cranes to load and unload ships etc., and
robot carts to move containers around
The Dock-Worker Robots (DWR) Domain
•aim: have one example to illustrate planning procedures and techniques
•problem must be nontrivial to be interesting, but not too much overhead to
introduce problem
•informal description:
•generalization of earlier example (state transition graph)
•harbour with several locations (docks), docked ships, storage areas for
containers, and parking areas for trucks and trains
•cranes to load and unload ships etc., and robot carts to move
containers around
•simplified and enriched version of this domain will be introduced later
•approach: use first-order predicate logic as representation
30. DWR Example State
[Figure: two locations l1 and l2; cranes k1 and k2; robot r1; containers ca–cf stacked in piles p1 and q1 on pallets; piles p2 and q2 both empty.]
31. 31
Actions in the DWR Domain
• move robot r from location l to some adjacent and unoccupied
location l’
• take container c with empty crane k from the top of pile p, all
located at the same location l
• put down container c held by crane k on top of pile p, all
located at location l
• load container c held by crane k onto unloaded robot r, all
located at location l
• unload container c with empty crane k from loaded robot r,
all located at location l
Actions in the DWR Domain
•move robot r from location l to some adjacent and unoccupied location l’
•take container c with empty crane k from the top of pile p, all located at the
same location l
•put down container c held by crane k on top of pile p, all located at location l
•load container c held by crane k onto unloaded robot r, all located at location l
•unload container c with empty crane k from loaded robot r, all located at
location l
•formal specifications will follow when we have introduced a formal action description
language
•problem: how to represent actions formally? first-order logic?
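As a preview of what such a formal description might look like, here is a hedged sketch of the move action in a STRIPS-like style, with preconditions and effects as sets of ground atoms. The predicate names are my own illustration; the course introduces the actual action description language later.

```python
# Sketch: one DWR action as precondition/add/delete sets of ground atoms.

def move_action(r, l, l2):
    # move robot r from location l to adjacent, unoccupied location l2
    return {
        "name": "move(%s,%s,%s)" % (r, l, l2),
        "precond": {"at(%s,%s)" % (r, l), "adjacent(%s,%s)" % (l, l2),
                    "free(%s)" % l2},
        "add": {"at(%s,%s)" % (r, l2), "free(%s)" % l},
        "delete": {"at(%s,%s)" % (r, l), "free(%s)" % l2},
    }

def apply(state, action):
    # applicable iff all preconditions hold in the state
    if not action["precond"] <= state:
        return None
    return (state - action["delete"]) | action["add"]
```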
32. 32
State-Transition Systems: Graph Example
[Figure: six states s0–s5 of a world with one robot, one crane, one container, a pallet, and two locations; arcs are labelled take, put, load, unload, move1, and move2.]
State-Transition Systems: Graph Example
•states: s0 to s5
•objects: robot, crane, container, pallet, two locations
•actions:
•crane can take/put the container from the pallet/onto the pallet
•crane can load/unload the container from the robot
•robot can drive to either location (with or without the container loaded)
•no events
•state transition function: arcs shown in graph
•note: state transitions are deterministic: each action leads to at most one other state
33. 33
Overview
• What is Planning (in AI)?
• A Conceptual Model for Planning
• Planning and Search
• Example Problems
Overview
•What is Planning (in AI)?
•A Conceptual Model for Planning
•Planning and Search
•Example Problems
•just done : more examples incl. DWR problem