The document discusses teaching a CP (constraint programming) module. It outlines the context, structure, and content of the module. It describes how lectures will cover CP theory, modeling problems, and using Choco solver. It also discusses assessed exercises, including a Sudoku problem and a choice of modeling assignments. The goal is to convey both theory and practice of CP through lectures, exercises, and interactive in-class modeling and solving.
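The Sudoku exercise is a standard first CP model. The module itself uses the Choco solver (Java); as a language-neutral sketch of the underlying idea, search plus constraint checking, here is a tiny backtracking solver for a 4x4 Sudoku. The puzzle grid below is made up for illustration.

```python
# Backtracking search with constraint checks on a 4x4 Sudoku (0 = empty).
def ok(grid, r, c, v):
    """Check the row, column, and 2x2 box constraints for placing v at (r, c)."""
    if v in grid[r] or v in (row[c] for row in grid):
        return False
    br, bc = 2 * (r // 2), 2 * (c // 2)  # top-left corner of the 2x2 box
    return all(grid[br + i][bc + j] != v for i in range(2) for j in range(2))

def solve(grid):
    """Fill empty cells depth-first; undo a placement when it leads to a dead end."""
    for r in range(4):
        for c in range(4):
            if grid[r][c] == 0:
                for v in range(1, 5):
                    if ok(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # backtrack
                return False
    return True  # no empty cell left: solved

puzzle = [[1, 0, 0, 0],
          [0, 0, 3, 0],
          [0, 4, 0, 0],
          [0, 0, 0, 2]]
solve(puzzle)
print(puzzle)
```

A real CP solver such as Choco adds constraint propagation on top of this search, pruning values before branching; the brute-force sketch only checks constraints after the fact.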
This document summarizes Xavier Gibert Serra's Ph.D. examination on anomaly detection in noisy images. It introduces the anomaly detection problem and challenges, such as dealing with nuisance parameters and learning from weakly labeled data. It then describes methods for anomaly detection on textured images using shearlets and an iterative shrinkage algorithm. Next, it discusses using image dictionaries for anomaly detection with a 3-way max-margin formulation to classify images as normal, broken, or missing. The document outlines GPU acceleration of shearlet transforms and classification results.
Machine learning can often be a daunting subject to tackle, much less to utilize in a meaningful manner. In this session, attendees will learn how to take their existing data, shape it, and create models that can automatically make principled business decisions directly in their applications. The discussion will include explanations of the data acquisition and shaping process. Additionally, attendees will learn the basics of machine learning - primarily the supervised learning problem.
This document discusses multimodal learning analytics (MLA), which examines learning through multiple modalities like video, audio, digital pens, etc. It provides examples of extracting features from these modalities to analyze problem solving, expertise levels, and presentation quality. Key challenges of MLA are integrating different modalities and developing tools to capture real-world learning outside online systems. While current accuracy is limited, MLA is an emerging field that could provide insights beyond traditional learning analytics.
Alexey Yashchenko and Yaroslav Voloshchuk, "False simplicity of front-end applications" (Fwdays)
It’s easy to underestimate a front-end project's complexity, which leads to shallow and thus incorrect implementation. Attempts to fix this problem result in uncontrolled complexity growth and undefined behavior in corner cases.
We'll discuss ways of revealing the inherent complexity of a problem and dealing with it both on theoretical and practical levels.
This document provides an overview of a course on algorithms and data structures. It outlines the course topics that will be covered over 15 weeks of lectures. These include data types, arrays, matrices, pointers, linked lists, stacks, queues, trees, graphs, sorting, and searching algorithms. Evaluation will be based on assignments, quizzes, projects, sessionals, and a final exam. The goal is for students to understand different algorithm techniques, apply suitable data structures to problems, and gain experience with classical algorithm problems.
This document provides an overview of machine learning concepts including:
1. It defines data science and machine learning, distinguishing machine learning's focus on letting systems learn from data rather than being explicitly programmed.
2. It describes the two main areas of machine learning - supervised learning which uses labeled examples to predict outcomes, and unsupervised learning which finds patterns in unlabeled data.
3. It outlines the typical machine learning process of obtaining data, cleaning and transforming it, applying mathematical models, and using the resulting models to make predictions. Popular models like decision trees, neural networks, and support vector machines are also briefly introduced.
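The obtain / clean / model / predict pipeline outlined above can be sketched in a few lines. The choice of scikit-learn and its bundled iris dataset is illustrative, not something the document prescribes.

```python
# A minimal sketch of the typical ML workflow: obtain data, split/transform it,
# fit a model (here a decision tree, one of the models the document mentions),
# and use it to predict on unseen examples.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Obtain: a small labeled dataset (supervised learning needs labels).
X, y = load_iris(return_X_y=True)

# Clean/transform: here just a train/test split; real pipelines would also
# impute missing values, scale, or encode features.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Model: fit on the labeled training examples.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Predict: evaluate generalization on held-out data.
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```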
The document discusses machine learning and various machine learning techniques. It defines machine learning as using data and experience to acquire models and modify decision mechanisms to improve performance. The document outlines different types of machine learning including supervised learning (using labeled data), unsupervised learning (using only unlabeled data), and reinforcement learning (where an agent takes actions and receives rewards or punishments). It provides examples of classification problems and discusses decision tree learning as a supervised learning method, including how decision trees are constructed and potential issues like overfitting.
This document discusses multimodal learning analytics (MLA), which examines learning through multiple modalities like video, audio, digital pens, etc. It provides examples of extracting features from these modalities to analyze problem-solving sessions. Video features like total movement, distance from table, and calculator tracking are described. Audio features like speech duration and word counts are mentioned. Digital pen features like strokes, pressure, and shapes are examined. The document concludes that MLA has much potential to explore learning in more realistic settings compared to traditional learning analytics.
This document discusses machine learning and various machine learning techniques. It begins by defining learning and different types of machine learning, including supervised learning, unsupervised learning, and reinforcement learning. It then focuses on supervised learning, discussing important concepts like training and test sets. Decision trees are presented as a popular supervised learning technique, including how they are constructed using a top-down recursive approach that chooses attributes to best split the data based on measures like information gain. Overfitting is also discussed as an issue to address with techniques like pruning.
The document discusses machine learning and various machine learning concepts. It defines learning as improving performance through experience. Machine learning involves using data to acquire models and learn hidden concepts. The main areas covered are supervised learning (data with labels), unsupervised learning (data without labels), semi-supervised learning (some labels present), and reinforcement learning (agent takes actions and receives rewards/punishments). Decision trees are presented as a way to represent hypotheses learned through examples, with attributes used to recursively split data into partitions.
The document discusses machine learning and various machine learning techniques. It defines machine learning as using data and experience to acquire models and modify decision mechanisms to improve performance. It describes supervised learning where data and labels are provided, unsupervised learning where only data is given, and reinforcement learning where an agent takes actions and receives rewards or punishments. Decision tree learning is discussed as a supervised learning method where trees are constructed by recursively splitting data based on attribute tests that optimize criteria like information gain. Overfitting and techniques like pruning are addressed to improve generalization.
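The information-gain criterion that these summaries mention for choosing decision-tree splits can be sketched in a few lines. The toy weather-style data below is made up for illustration.

```python
# Information gain = entropy of the labels minus the expected entropy after
# splitting on an attribute; the tree builder picks the attribute with the
# largest gain at each node.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting (rows, labels) on attribute attr."""
    total, n, remainder = entropy(labels), len(rows), 0.0
    for value in {row[attr] for row in rows}:
        subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder

# Toy data: "outlook" perfectly separates the labels, so the gain is maximal.
rows = [{"outlook": "sunny"}, {"outlook": "sunny"},
        {"outlook": "rainy"}, {"outlook": "rainy"}]
labels = ["no", "no", "yes", "yes"]
print(information_gain(rows, labels, "outlook"))  # 1.0
```

A perfect split drives both subset entropies to zero, which is also why unchecked greedy splitting overfits: a high-arity attribute can look perfect on training data by chance, motivating the pruning techniques mentioned above.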
This document provides an overview of the CS760 Machine Learning course taught by David Page at the University of Wisconsin. The course will cover a broad survey of machine learning algorithms and applications over 30 class meetings. Topics will include both theoretical and practical aspects of supervised learning algorithms like naive Bayes, decision trees, neural networks, and support vector machines. Students will complete programming homework assignments applying various machine learning algorithms and a midterm exam. The primary goals of the course are to understand what learning systems should do and how existing systems work.
This document outlines the syllabus for a machine learning course. It introduces the instructor, teaching assistant, required textbook, and meeting schedule. It describes the course style as primarily algorithmic and experimental, covering many ML subfields. The goals are to understand what a learning system should do and how existing systems work. Background knowledge in languages, AI topics, and math is assumed, but no prior ML experience is needed. Requirements include biweekly programming homework, a midterm exam, and a final project. Grading will be based on homework, exam, project, and discussion participation. Policies on late homework and academic misconduct are also provided.
Sean Kandel - Data profiling: Assessing the overall content and quality of a ...huguk
The task of “data profiling”—assessing the overall content and quality of a data set—is a core aspect of the analytic experience. Traditionally, profiling was a fairly cut-and-dried task: load the raw numbers into a stat package, run some basic descriptive statistics, and report the output in a summary file or perhaps a simple data visualization. However, data volumes can be so large today that traditional tools and methods for computing descriptive statistics become intractable; even with scalable infrastructure like Hadoop, aggressive optimization and statistical approximation techniques must be used. In this talk Sean will cover technical challenges in keeping data profiling agile in the Big Data era. He will discuss both research results and real-world best practices used by analysts in the field, including methods for sampling, summarizing and sketching data, and the pros and cons of using these various approaches.
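One technique in the sampling family can be sketched as classic reservoir sampling, which keeps a uniform random sample of fixed size from a stream too large to hold in memory, in a single pass. This is an illustrative example of the genre, not necessarily the specific method Sean discusses.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Draw k items uniformly at random, in one pass, from an iterable of unknown length."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)        # fill the reservoir first
        else:
            # Item i replaces a random reservoir slot with probability k / (i + 1),
            # which keeps every item's inclusion probability equal to k / n.
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

# Profile a million-item "stream" while holding only 10 items in memory.
sample = reservoir_sample(range(1_000_000), 10)
print(len(sample))  # 10
```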
Sean is Trifacta’s Chief Technical Officer. He completed his Ph.D. at Stanford University, where his research focused on user interfaces for database systems. At Stanford, Sean led development of new tools for data transformation and discovery, such as Data Wrangler. He previously worked as a data analyst at Citadel Investment Group.
This is a slide deck from a presentation that my colleague Shirin Glander (https://www.slideshare.net/ShirinGlander/) and I gave together. As we each created our respective parts of the presentation on our own, it is quite easy to figure out who did which part, as the two slide decks look quite different ... :)
For the sake of simplicity and completeness, I just copied the two slide decks together. As I did the "surrounding" part, I inserted Shirin's part at the point where she took over and then added my concluding slides at the end. Well, I'm sure you will figure it out easily ... ;)
The presentation was intended as an introduction to deep learning (DL) for people who are new to the topic. It starts with some DL success stories as motivation. Then a quick classification and a bit of history follow before the "how" part starts.
The first part of the "how" is some DL theory, meant to demystify the topic and to explain and connect some of the most important terms on the one hand, but also to give an idea of the breadth of the topic on the other.
After that, the second part dives deeper into the question of how to actually implement DL networks. This part starts with coding everything on your own and then moves, step by step, toward less coding, depending on where you want to start.
The presentation ends with some pitfalls and challenges that you should keep in mind if you want to dive deeper into DL - plus an invitation to become part of it.
As always, the voice track of the presentation is missing. I hope the slides are of some use for you, though.
This is a slide deck from a presentation that my colleague Uwe Friedrichsen (https://www.slideshare.net/ufried/) and I gave together. As we each created our respective parts of the presentation on our own, it is quite easy to figure out who did which part, as the two slide decks look quite different ... :)
For the sake of simplicity and completeness, Uwe copied the two slide decks together. As he did the "surrounding" part, he inserted my part at the point where I took over and then added concluding slides at the end. Well, I'm sure you will figure it out easily ... ;)
The presentation was intended as an introduction to deep learning (DL) for people who are new to the topic. It starts with some DL success stories as motivation. Then a quick classification and a bit of history follow before the "how" part starts.
The first part of the "how" is some DL theory, meant to demystify the topic and to explain and connect some of the most important terms on the one hand, but also to give an idea of the breadth of the topic on the other.
After that, the second part dives deeper into the question of how to actually implement DL networks. This part starts with coding everything on your own and then moves, step by step, toward less coding, depending on where you want to start.
The presentation ends with some pitfalls and challenges that you should keep in mind if you want to dive deeper into DL - plus an invitation to become part of it.
As always, the voice track of the presentation is missing. I hope the slides are of some use for you, though.
This document provides an overview of machine learning concepts including:
1. It discusses different machine learning applications, such as predicting age and identifying similar faces, along with techniques like decision trees.
2. It covers machine learning issues like overfitting, the importance of generalization to new examples, and using a test set for evaluation.
3. It introduces neural networks including the basic structure of a perceptron, multi-layer perceptrons, backpropagation for training, and why deeper networks can be more powerful than shallow ones.
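The perceptron described above, a weighted sum followed by a threshold, can be sketched with the classic perceptron update rule. Learning logical AND works because it is linearly separable; XOR is the standard example that forces a multi-layer network, which is where backpropagation and depth come in.

```python
# A single perceptron trained with the rule: w <- w + lr * (target - output) * x.
def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y  # zero when the prediction is already correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND: fires only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```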
The document discusses data-oriented design principles for game engine development in C++. It emphasizes understanding how data is represented and used to solve problems, rather than focusing on writing code. It provides examples of how restructuring code to better utilize data locality and cache lines can significantly improve performance by reducing cache misses. Booleans stored in structures are identified as having extremely low information density, wasting cache space.
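The boolean-density point can be illustrated in any language: a bool typically occupies at least a whole byte (often more, with struct padding) to carry a single bit, while a bitmask packs eight flags into that byte. The slides' examples are in C++; this is a language-neutral sketch with made-up flag names.

```python
# Pack several one-bit flags into a single integer instead of separate booleans.
ALIVE, VISIBLE, MOVING = 1 << 0, 1 << 1, 1 << 2  # hypothetical flag names

def set_flag(mask, flag):
    return mask | flag

def clear_flag(mask, flag):
    return mask & ~flag

def has_flag(mask, flag):
    return bool(mask & flag)

# Two flags stored in one int; with one-byte bools these would cost
# (at least) a byte each, most of it carrying no information.
mask = set_flag(set_flag(0, ALIVE), MOVING)
print(has_flag(mask, ALIVE), has_flag(mask, VISIBLE), has_flag(mask, MOVING))  # True False True
```

Beyond saving memory, the data-oriented payoff is that more useful state fits in each cache line, so hot loops touch fewer lines and miss less often.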
Hands-On Machine Learning with Scikit-Learn and TensorFlow - Chapter 8 (Hakky St)
This is the documentation of a study meeting in our lab.
The book is "Hands-On Machine Learning with Scikit-Learn and TensorFlow", and this covers Chapter 8.
Coaching teams in creative problem solving (Flowa Oy)
Agile has helped teams collaborate and organize work better. That's great. Better teamwork and a better understanding of the work definitely help a team do the right things. Agile has also led the way toward technical practices such as Continuous Integration and Delivery, Test-Driven Development, and SOLID architecture principles. Great, these things definitely help the team do things right.
Then again, most of the time in software projects goes into problem solving and similar creative acts. Agile has relatively little to offer in these areas. Currently, agile is not about creativity, nor is it about problem solving.
This coaching circle session will focus on the creative core of software development: solving novel, original, and broad problems more creatively and effectively, all the time. I will introduce some principles and tools I've found useful when helping people solve hard problems and find creative solutions.
The document describes solving linear programming problems graphically. It provides an example maximization problem with the objective of maximizing Z = 30x1 + 40x2 subject to three constraints. Graphically, the feasible region satisfying all constraints is found by plotting each constraint's boundary line (its intercepts are found by setting x1 and x2 to zero in turn) and shading the side of the line that satisfies the inequality. The optimal solution that maximizes Z then lies at a corner point of the feasible region on the graph.
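The graphical method can be mirrored in code: compute the intersections of the constraint boundary lines, keep the feasible ones, and evaluate Z at each corner. The objective Z = 30x1 + 40x2 comes from the document, but its three constraints are not reproduced here, so the coefficients below are hypothetical stand-ins chosen only to illustrate the procedure.

```python
from itertools import combinations

# Constraints in the form a*x1 + b*x2 <= c, including nonnegativity
# (-x1 <= 0 and -x2 <= 0). The first three rows are hypothetical.
cons = [(1, 1, 40), (2, 1, 60), (0, 1, 30), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection of the two boundary lines, or None if they are parallel."""
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= d + 1e-9 for a, b, d in cons)

# Candidate optima are the corners of the feasible region: feasible
# intersections of pairs of constraint boundaries.
corners = [p for c1, c2 in combinations(cons, 2)
           if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(corners, key=lambda p: 30 * p[0] + 40 * p[1])
print(best, 30 * best[0] + 40 * best[1])  # (10.0, 30.0) 1500.0
```

This is exactly the corner-point theorem the graphical method relies on: a linear objective over a convex polygon attains its maximum at a vertex.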
Writing machine learning code is now possible with ML.NET, a native .NET library that has recently reached its 1.0 milestone. Let's look at what we can do with this library and which scenarios it can handle.
Similar to "Teaching Constraint Programming" by Patrick Prosser
5. Context
• Final Year
• 120 credits (1 credit approx 10 hours)
• 30 credit project
• 4 modules in semester 1
• 4 modules in semester 2
• A module is 10 credits
• 1 compulsory module on Prof Issues
• Students select 4 modules from 8 in each semester
• There is competition between staff for students
• Students make choice after 2nd week
• First lectures have to be as attractive as possible
• CP(M)
• 30 lectures
• 1 short assessed exercise (5%)
• 1 long(ish) assessed exercise (15%)
• Exam (80%)
• Students are motivated by “marks” ☹
31. 1st observations & questions
• How do you know “How not to do it”?
• Why are you using choco (jchoco I call it)?
• Do you steal everything?
• do you tell the students you deal in stolen goods?
• have you no sense of shame?
• You get students to use technology they don’t understand!
• We would NEVER EVER do that!
• You write code in the lecture? Edit compile AND run?
• what if something goes wrong? You’d look stupid, right?
• You put all of the code on the web?
36. Opening up the hood: part 1, search
So how does CP work?
• BT: Chronological BackTracking (thrashing!)
• FC: Forward Checking (thrashing!)
• CBJ: Conflict-directed BackJumping (thrashing!)
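The simplest of the three, chronological backtracking, can be shown in a few lines. A minimal sketch (not from the lectures; the class and method names are my own) on graph colouring, one of the problems modelled later in the module:

```java
// Minimal chronological backtracking (BT) over a toy binary CSP:
// graph colouring, where connected vertices must take different colours.
// Illustrative sketch only; a real solver adds propagation and heuristics.
public class BT {
    // Is the value just assigned at position `depth` consistent
    // with all earlier assignments?
    static boolean consistent(int[] asg, int depth, boolean[][] adj) {
        for (int i = 0; i < depth; i++)
            if (adj[i][depth] && asg[i] == asg[depth]) return false;
        return true;
    }

    static boolean search(int[] asg, int depth, int d, boolean[][] adj) {
        if (depth == asg.length) return true;          // all variables assigned
        for (int v = 0; v < d; v++) {                  // try each colour in turn
            asg[depth] = v;
            if (consistent(asg, depth, adj) && search(asg, depth + 1, d, adj))
                return true;                           // success further down
        }
        return false;                                  // dead end: backtrack chronologically
    }

    public static void main(String[] args) {
        // A triangle needs 3 colours: 2 colours fail, 3 succeed.
        boolean[][] adj = {{false,true,true},{true,false,true},{true,true,false}};
        System.out.println(search(new int[3], 0, 2, adj));  // false
        System.out.println(search(new int[3], 0, 3, adj));  // true
    }
}
```

On failure BT always undoes the most recent assignment, even when the real culprit is higher in the tree; that is exactly the thrashing the slide complains about, and what FC and CBJ try to reduce.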
39. Opening up the hood: part 2, arc-consistency
So how does CP work?
Arc-consistency
• Definition (aka 2-consistency)
• Covered 1-, 2- and 3-consistency
• Algorithms: AC3, AC4/6/2001, AC5
• Is AC a decision procedure?
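AC3 is compact enough to sketch in class. The version below (my own illustrative code, not lecture material) is specialised to not-equals constraints, where a value loses support only when the neighbouring domain has shrunk to exactly that value:

```java
import java.util.*;

// AC-3 sketch for binary not-equals constraints (a simplification of
// the general algorithm, which checks support in arbitrary relations).
public class AC3 {
    // revise(x, y) under x != y: v in D(x) loses support iff D(y) == {v}.
    static boolean revise(Set<Integer> dx, Set<Integer> dy) {
        return dy.size() == 1 && dx.removeAll(dy);
    }

    static boolean ac3(List<Set<Integer>> dom, boolean[][] adj) {
        int n = dom.size();
        Deque<int[]> q = new ArrayDeque<>();
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j && adj[i][j]) q.add(new int[]{i, j});
        while (!q.isEmpty()) {
            int[] arc = q.poll();
            int x = arc[0], y = arc[1];
            if (revise(dom.get(x), dom.get(y))) {
                if (dom.get(x).isEmpty()) return false;   // domain wipeout
                for (int z = 0; z < n; z++)               // re-examine arcs (z, x)
                    if (z != x && z != y && adj[z][x]) q.add(new int[]{z, x});
            }
        }
        return true;   // arc-consistent, but NOT necessarily satisfiable
    }

    public static void main(String[] args) {
        // x0 in {0}, x1 in {0,1}, x2 in {0,1}, all pairwise different:
        // propagation alone wipes out a domain, proving inconsistency.
        List<Set<Integer>> dom = new ArrayList<>();
        dom.add(new HashSet<>(List.of(0)));
        dom.add(new HashSet<>(List.of(0, 1)));
        dom.add(new HashSet<>(List.of(0, 1)));
        boolean[][] adj = {{false,true,true},{true,false,true},{true,true,false}};
        System.out.println(ac3(dom, adj));
    }
}
```

Note the comment answering the slide's closing question: AC is not a decision procedure. A wipeout proves unsatisfiability, but an arc-consistent network may still have no solution, which is why search is needed on top.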
41. We modelled some problems
number partitioning
jobshop scheduling
magic square
n-queens
bin packing
knight’s tour
Ramsey number
Orthogonal Latin Squares
Crystal Maze
graph colouring
m-queens
Talent Scheduling
Crossword Puzzle
42. What are the decision variables?
Are there any symmetries?
Is there a variable/value ordering heuristic?
Is there another model?
We modelled some problems
How does search go?
Redundant constraints?
Dual or Hidden Variable encoding?
50. Local search (aka neighbourhood search, meta-heuristics) Search again
• HC, SA, TS, GLS, GA, ACO, …
• why we need them
• problems in using them: incompleteness, move operators, evaluation/fitness functions, tuning parameters
• problems in using them in CP
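The ingredients the slide lists (a move operator, an evaluation function, incompleteness) all show up in a few lines of min-conflicts on n-queens. A sketch under my own naming, not taken from the lectures:

```java
import java.util.*;

// Min-conflicts local search on n-queens: one queen per column,
// repeatedly move a conflicted queen to the row that minimises
// conflicts. Incomplete, so we cap the steps and restart.
public class MinConflicts {
    // Evaluation function: conflicts for a queen at (col, row).
    static int conflicts(int[] q, int col, int row) {
        int c = 0;
        for (int j = 0; j < q.length; j++) {
            if (j == col) continue;
            if (q[j] == row || Math.abs(q[j] - row) == Math.abs(j - col)) c++;
        }
        return c;
    }

    static boolean solve(int[] q, int maxSteps, Random rnd) {
        int n = q.length;
        for (int step = 0; step < maxSteps; step++) {
            List<Integer> bad = new ArrayList<>();
            for (int j = 0; j < n; j++)
                if (conflicts(q, j, q[j]) > 0) bad.add(j);
            if (bad.isEmpty()) return true;            // evaluation == 0: solved
            int col = bad.get(rnd.nextInt(bad.size())); // pick a conflicted queen
            int bestRow = q[col], best = Integer.MAX_VALUE;
            for (int r = 0; r < n; r++) {              // move operator: best row
                int c = conflicts(q, col, r);
                if (c < best) { best = c; bestRow = r; }
            }
            q[col] = bestRow;
        }
        return false;   // incompleteness: may give up without an answer
    }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        int n = 8;
        int[] q = new int[n];
        boolean ok = false;
        for (int tries = 0; tries < 100 && !ok; tries++) {
            for (int j = 0; j < n; j++) q[j] = rnd.nextInt(n); // random restart
            ok = solve(q, 1000, rnd);
        }
        System.out.println(ok);
    }
}
```

The restart loop is the usual answer to incompleteness; the tuning parameters (step cap, number of restarts) are exactly the kind the slide warns about.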
51. Limited Discrepancy Search (LDS) Search again
• motivation for LDS
• when might we use it?
• when should we not use it?
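The motivation is easiest to see on a binary tree where the value-ordering heuristic always prefers one branch. A sketch (my own, following the spirit of Harvey and Ginsberg's LDS; a probe with limit k may revisit leaves seen at smaller k, which improved LDS variants avoid):

```java
// Limited Discrepancy Search sketch: search a binary tree of depth d
// where the heuristic always prefers branch "0". probe(k) visits only
// leaves that disagree with the heuristic at most k times; LDS then
// iterates k = 0, 1, 2, ... so near-heuristic leaves are tried first.
public class LDS {
    static String goal = "00100";   // the leaf we want: one heuristic mistake
    static int probes = 0;

    static boolean probe(String path, int depth, int k) {
        if (depth == 0) { probes++; return path.equals(goal); }
        if (probe(path + "0", depth - 1, k)) return true;              // follow heuristic
        return k > 0 && probe(path + "1", depth - 1, k - 1);           // spend a discrepancy
    }

    public static void main(String[] args) {
        for (int k = 0; k <= goal.length(); k++) {
            probes = 0;
            if (probe("", goal.length(), k)) {
                System.out.println("found with k=" + k + " after " + probes + " probes");
                return;
            }
        }
    }
}
```

When the heuristic is good, a solution with one mistake is found after a handful of probes, long before depth-first search would have exhausted the all-left subtrees. When the heuristic is uninformative, LDS just adds overhead, which answers "when should we not use it?".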
59. CP for a design problem
Using CP as an intelligent database
Not solving a problem, but using CP as an “active” representation
Explanations via QuickXplain
60. theory
• csp <v,c,d> and its complexity
• search (bt, fc, cbj, …)
• thrashing
• arc-consistency
• levels of consistency
• heuristics
• local search & lds
• dual & hidden variables
• sat
• phase transition phenomena & constrainedness
summary
61. practice
number partitioning
jobshop scheduling
magic square
Crystal Maze
Talent Scheduling
Orthogonal Latin Squares
n-queens
bin packing
knight’s tour
graph colouring
summary Modelling & Solving Problems
62. practice
summary Modelling & Solving Problems
• variables (enumerated/bound/setsVar) and their domains
• constraints (neq, allDiff, ifOnlyIf, …)
• decision variables
• what propagation might take place
• size of the encoding/model (how it scales with problem size)
• heuristics (dynamic/static, variable/value)
• size of the search/state space
• how search goes (example was knight’s tour)
• alternative models (dual? hidden? values as variables?)
• optimisation (with maximise/minimise, seqn of decision problems)
• dealing with conflicts (soft constraints & penalties)
• symmetry breaking (ramsey, bin packing, …)
• redundant constraints (magic square, MOLS …)
• what will make problems hard to solve and what will make them easy?
63. Rhythm
Once we are up to speed, an ideal week goes like this:
• lecture on new theory
• a new problem
• Model and solve
• “Performance CP”
• Consider alternative models
• Symmetry breaking
• When do problems get hard
65. Where are the really hard problems
The usual stuff:
int x = 3;
int y = 5;
int z = Math.max(x,y);
66. Where are the really hard problems
The usual stuff:
int x = 3;
int y = 5;
int z = Math.max(x,y);
Our stuff:
Model model = new CPModel();
IntegerVariable x = makeIntVar("x",0,3);
IntegerVariable y = makeIntVar("y",0,5);
IntegerVariable z = makeIntVar("z",0,4);
model.addConstraint(eq(z,max(x,y)));
67. Where are the really hard problems
The usual stuff:
int x = 3;
int y = 5;
int z = Math.max(x,y);
Our stuff:
Model model = new CPModel();
IntegerVariable x = makeIntVar("x",0,3);
IntegerVariable y = makeIntVar("y",0,5);
IntegerVariable z = makeIntVar("z",0,4);
model.addConstraint(eq(z,max(x,y)));
Breaking the mental model of variables in a programming language.
68. Where are the really hard problems
The usual stuff:
int x = 3;
int y = 5;
int z = Math.max(x,y);
Our stuff:
Model model = new CPModel();
IntegerVariable x = makeIntVar("x",0,3);
IntegerVariable y = makeIntVar("y",0,5);
IntegerVariable z = makeIntVar("z",0,4);
model.addConstraint(eq(z,max(x,y)));
• Post x > 0 and propagate
• Post z > 0 and propagate
• Post z < 3 and propagate
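What the solver does at each "post and propagate" step can be imitated in plain Java. The sketch below (my own hand-rolled bounds reasoning, not the Choco API shown above) propagates z = max(x, y) to a bounds fixpoint as the three constraints are posted in turn:

```java
// Hand-rolled bounds propagation for z = max(x, y) with the starting
// domains from the slide: x in 0..3, y in 0..5, z in 0..4.
public class MaxPropagation {
    int xlo = 0, xhi = 3, ylo = 0, yhi = 5, zlo = 0, zhi = 4;

    // Propagate z = max(x, y) on bounds until nothing changes.
    void propagate() {
        boolean changed = true;
        while (changed) {
            changed = false;
            int nzlo = Math.max(zlo, Math.max(xlo, ylo)); // z >= max of lower bounds
            int nzhi = Math.min(zhi, Math.max(xhi, yhi)); // z <= max of upper bounds
            int nxhi = Math.min(xhi, zhi);                // x <= z always
            int nyhi = Math.min(yhi, zhi);                // y <= z always
            if (nzlo != zlo || nzhi != zhi || nxhi != xhi || nyhi != yhi) changed = true;
            zlo = nzlo; zhi = nzhi; xhi = nxhi; yhi = nyhi;
            // If one variable can no longer reach z's lower bound,
            // the other must supply the maximum on its own.
            if (yhi < zlo && xlo < zlo) { xlo = zlo; changed = true; }
            if (xhi < zlo && ylo < zlo) { ylo = zlo; changed = true; }
        }
    }

    public static void main(String[] args) {
        MaxPropagation p = new MaxPropagation();
        p.propagate();                  // already prunes y to 0..4 (y <= z <= 4)
        p.xlo = Math.max(p.xlo, 1);     // post x > 0
        p.propagate();                  // lifts z's lower bound to 1
        p.zlo = Math.max(p.zlo, 1);     // post z > 0 (nothing new to do)
        p.propagate();
        p.zhi = Math.min(p.zhi, 2);     // post z < 3
        p.propagate();                  // clips x and y to at most 2
        System.out.println("x in " + p.xlo + ".." + p.xhi
                + ", y in " + p.ylo + ".." + p.yhi
                + ", z in " + p.zlo + ".." + p.zhi);
    }
}
```

Nothing here looks like an assignment statement, which is exactly the mental-model break the previous slide points at: posting x > 0 changes the domain of z, a "variable" the student never touched.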
69. Where are the really hard problems
Almost anything to do with implication is mind-blowing (or maybe I am a bad teacher).