The document discusses the Analytic Hierarchy Process (AHP) decision-making method. It begins with an introduction to the speaker and an overview of the tutorial, then outlines the typical steps in a decision-making process: identifying the decision, gathering information, identifying alternatives, weighing the evidence, choosing among alternatives, taking action, and reviewing the decision. The remainder of the document explains how to apply AHP through pairwise comparisons and the calculation of weights and consistency, with examples illustrating how AHP can evaluate multiple criteria in complex decision problems.
2. About me
• Education
  • NCU (MIS), NCCU (CS)
• Work Experience
  • Telecom big data innovation
  • AI projects
  • Retail marketing technology
• User Group
  • TW Spark User Group
  • TW Hadoop User Group
  • Taiwan Data Engineer Association Director
• Research
  • Big Data / ML / AIoT / AI columnist
"How can you not get romantic about baseball?"
6. Steps of the Decision-Making Process
• 1. Identify the decision
• 2. Gather relevant information
• 3. Identify the alternatives
• 4. Weigh the evidence
• 5. Choose among the alternatives
• 6. Take action
• 7. Review your decision
Quiz: What is the goal-defining step, and why is it so important?
7. IDENTIFY the decision
• To make a decision, you must first identify the problem you need to solve or the question you need to answer.
• If you need to achieve a specific goal from your decision, make it measurable and timely so you know for certain that you met the goal at the end of the process.
8. Gather relevant information
• Do an INTERNAL assessment, seeing where your organization has succeeded and failed in areas related to your decision.
• Seek information from EXTERNAL sources, including studies, market research, and, in some cases, evaluation from paid consultants.
• Beware: you may easily become bogged down by too much information; facts and statistics that seem applicable to your situation might only complicate the process.
9. IDENTIFY the alternatives
• There is usually more than one option to consider when trying to MEET A GOAL.
• For example, if your company is trying to gain more engagement on social media, your alternatives could include paid social advertisements, a change in your organic social media strategy, or a combination of the two.
10. WEIGH the evidence
• Review what companies have done in the past to succeed in these areas, and take a good hard look at your own organization's wins and losses.
• Identify potential PITFALLS for each of your alternatives, and weigh those against the possible rewards.
11. CHOOSE among the alternatives
• You've identified and clarified what decision needs to be made.
• You've gathered all relevant information and developed and considered the POTENTIAL PATHS to take.
• You are perfectly prepared to CHOOSE.
• Consider methods such as AHP, the Delphi method, factor analysis, literature review, and questionnaires.
https://blog.mesydel.com/what-is-the-delphi-method-and-what-is-it-used-for-feb2d26f917a
12. TAKE action
• Once you've made your decision, act on it! Develop a plan to make your decision tangible and achievable.
• Consider project frameworks such as the PMBOK and Agile Scrum.
https://www.agilealliance.org/
13. REVIEW your decision
• Take an honest look back at your decision.
• Did you solve the problem? Did you answer the question? Did you meet your goals?
• If so, take NOTE of what worked for future reference.
• If not, learn from your MISTAKES as you begin the decision-making process again.
[Diagram: solve the question → learn the knowledge → good decision]
15. Analytic Hierarchy Process Introduction
• Thomas L. Saaty was a Distinguished University Professor at the University of Pittsburgh. He made notable contributions to the field of operations research.
• He is the inventor, architect, and primary theoretician of the Analytic Hierarchy Process (AHP), a decision-making framework used for large-scale, multi-party, multi-criteria decision analysis.
http://www.rafikulislam.com/uploads/resourses/197245512559a37aadea6d.pdf
16. Analytic Hierarchy Process Introduction
• Procedure of AHP
  • Model the problem as a HIERARCHY: DECOMPOSE the problem into hierarchical form.
  • Evaluate the hierarchy: make PAIRWISE comparisons from the point of view of importance to the problem solution.
  • Compute priorities: ESTABLISH the weight system.
[Diagram: Problem at the top; Criteria A, B, C at the second level, each with its own elements; Alternatives 1, 2, 3 at the bottom]
• Homogeneous criteria are placed on the same level and are independent of each other.
• The element above is the criterion; the elements below it are compared pairwise for their importance to it.
• Criteria can be identified via literature review, questionnaire, factor analysis, or the Delphi method.
17. Analytic Hierarchy Process Introduction
• A multi-criteria decision-making example: rows are the alternatives, columns are the criteria.

              Price or Cost   Storage Space   Camera Quality   Looks
  Mobile 1    $250            16 GB           12 MP            5
  Mobile 2    $200            16 GB           8 MP             3
  Mobile 3    $300            32 GB           16 MP            4
  Mobile 4    $270            32 GB           8 MP             4
  Mobile 5    $225            16 GB           16 MP            2
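As a sketch, the same decision table can be loaded in Python (the language of the deck's companion notebooks); the pandas DataFrame below simply mirrors the table above:

```python
import pandas as pd

# Decision table: alternatives as rows, criteria as columns
mobiles = pd.DataFrame(
    {"Price or Cost ($)": [250, 200, 300, 270, 225],
     "Storage Space (GB)": [16, 16, 32, 32, 16],
     "Camera Quality (MP)": [12, 8, 16, 8, 16],
     "Looks": [5, 3, 4, 4, 2]},
    index=["Mobile 1", "Mobile 2", "Mobile 3", "Mobile 4", "Mobile 5"])
print(mobiles)
```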
18. Analytic Hierarchy Process Introduction
• Step 1 in AHP
  • Develop a hierarchical structure with the GOAL at the top level, the criteria at the second level, and the alternatives at the third level.
[Hierarchy: Level 1 (goal): Buying a Mobile; Level 2 (independent criteria): Price or Cost, Storage Space, Camera Quality, Looks; Level 3 (alternatives): Mobile 1 through Mobile 5]
• This is a TOP-to-bottom approach. (Thomas Saaty: no more than 7 criteria.)
• AHP doesn't consider relations between the criteria.
19. Analytic Hierarchy Process Introduction
• Step 2 in AHP
  • Determine the relative importance of each criterion with respect to the GOAL.
  • Use the scale of relative importance and pairwise comparison.

  Scale of importance    Description
  1                      Equal importance
  3                      Moderate importance
  5                      Strong importance
  7                      Very strong importance
  9                      Extreme importance
  2, 4, 6, 8             Intermediate values
  1/3, 1/5, 1/7, 1/9     Values for inverse comparison
20. Analytic Hierarchy Process Introduction
• Step 3 in AHP
  • Fill in the data in the pairwise comparison matrix (row element compared against column element; the lower triangle holds the reciprocals).
  • How important is 「Price or Cost」 with respect to 「Storage Space」?
  • Ex: Price or Cost is of STRONG importance compared with Storage Space. If Storage Space is worth x, Price or Cost is worth 5x, so the cell is 5x/x = 5 and its mirror cell is x/5x = 1/5.

                   Price or Cost   Storage Space   Camera Quality   Looks
  Price or Cost    1               5               4                7
  Storage Space    1/5             1               1/2              3
  Camera Quality   1/4             2               1                3
  Looks            1/7             1/3             1/3              1
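A minimal Python sketch of this matrix, assuming NumPy (the deck's notebooks are not shown here, so the variable names are illustrative):

```python
import numpy as np

# Pairwise comparison matrix for the criteria, in the order:
# Price or Cost, Storage Space, Camera Quality, Looks
A = np.array([
    [1,   5,   4,   7],
    [1/5, 1,   1/2, 3],
    [1/4, 2,   1,   3],
    [1/7, 1/3, 1/3, 1],
])

# A valid AHP matrix is reciprocal: A[j, i] == 1 / A[i, j]
assert np.allclose(A * A.T, np.ones_like(A))
```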
21. Analytic Hierarchy Process Introduction
• With more than one questionnaire input from experts, how do we calculate the value to fill in the matrix?
• Use the GEOMETRIC mean instead of the arithmetic mean.
• Example of two experts' inputs for the same comparison:
  • Expert A: 1/3, Expert B: 3
  • Geometric mean: √((1/3) × 3) = 1, so the opposing judgments cancel out.
  • Arithmetic mean: ((1/3) + 3) / 2 ≈ 1.67, which is biased toward the larger judgment.
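A small sketch of this aggregation (aggregate_judgments is a hypothetical helper, not a name from the deck):

```python
import numpy as np

def aggregate_judgments(values):
    """Combine several experts' judgments for one matrix cell
    using the geometric mean."""
    values = np.asarray(values, dtype=float)
    return values.prod() ** (1.0 / len(values))

print(aggregate_judgments([1/3, 3]))   # 1.0 -- the opposing judgments cancel
print(np.mean([1/3, 3]))               # ~1.67 -- the arithmetic mean is skewed
```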
22. Analytic Hierarchy Process Introduction
• Step 4 in AHP
  • Sum each column, then divide every entry by its column sum (in ML we call this feature scaling). For example, the first column sum is 1.59 = 1 + 1/5 + 1/4 + 1/7.

                   Price or Cost         Storage Space       Camera Quality        Looks
  Price or Cost    1/1.59 = 0.6289       5/8.33 = 0.6002     4/5.83 = 0.6861       7/14 = 0.5
  Storage Space    (1/5)/1.59 = 0.1258   1/8.33 = 0.12       (1/2)/5.83 = 0.0858   3/14 = 0.2143
  Camera Quality   (1/4)/1.59 = 0.1572   2/8.33 = 0.2401     1/5.83 = 0.1715       3/14 = 0.2143
  Looks            (1/7)/1.59 = 0.0898   (1/3)/8.33 = 0.04   (1/3)/5.83 = 0.0572   1/14 = 0.0714
  Column sum       1.59                  8.33                5.83                  14

• Normalization: data are rescaled so that they fall in the range [0, 1].
• Standardization: also called z-score normalization; features take on the properties of a standard normal distribution with mean μ = 0 and standard deviation σ = 1, which scales them roughly into the range [-1, 1].
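Continuing the sketch from Step 3, the column normalization is one line in NumPy:

```python
# Normalize every column of the pairwise matrix by its column sum
col_sums = A.sum(axis=0)     # approx [1.59, 8.33, 5.83, 14.0]
A_norm = A / col_sums
print(A_norm.round(4))
```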
23. Analytic Hierarchy Process Introduction
• Step 5 in AHP
  • Calculate the criteria weights in a new column as the average of each row of the normalized matrix; this approximates the principal EIGENVECTOR (see eig.ipynb).
  • Example for the first row: (0.6289 + 0.6002 + 0.6861 + 0.5) / 4 = 0.6038.

                   Price or Cost   Storage Space   Camera Quality   Looks    Criteria Weights
  Price or Cost    0.6289          0.6002          0.6861           0.5      0.6038
  Storage Space    0.1258          0.12            0.0858           0.2143   0.1365
  Camera Quality   0.1572          0.2401          0.1715           0.2143   0.1958
  Looks            0.0898          0.04            0.0572           0.0714   0.0646
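Both the row-average approximation and the exact principal eigenvector (presumably what eig.ipynb demonstrates) can be computed as follows:

```python
# Criteria weights as row means of the normalized matrix
weights = A_norm.mean(axis=1)
print(weights.round(4))        # approx [0.6038, 0.1365, 0.1958, 0.0646]

# Exact AHP weights: the principal eigenvector of A, scaled to sum to 1
eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
print((principal / principal.sum()).round(4))
```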
24. Analytic Hierarchy Process Introduction
• Step 6 in AHP
  • Start calculating the consistency: multiply each column of the ORIGINAL matrix by the corresponding criterion weight (this is A times the eigenvector) and sum each row into a Weighted Sum Value.
  • Example for the first row: 0.6038 + 0.6825 + 0.7832 + 0.4522 = 2.5217.

                   ×0.6038                 ×0.1365               ×0.1957                ×0.0646             Weighted Sum Value
  Price or Cost    1×0.6038 = 0.6038       5×0.1365 = 0.6825     4×0.1957 = 0.7832      7×0.0646 = 0.4522   2.5217
  Storage Space    0.2×0.6038 = 0.1208     1×0.1365 = 0.1365     0.5×0.1957 = 0.0979    3×0.0646 = 0.1938   0.549
  Camera Quality   0.25×0.6038 = 0.151     2×0.1365 = 0.273      1×0.1957 = 0.1958      3×0.0646 = 0.1938   0.8136
  Looks            (1/7)×0.6038 = 0.0863   0.33×0.1365 = 0.045   0.33×0.1957 = 0.0646   1×0.0646 = 0.0646   0.2616
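In the running sketch, the Weighted Sum Value column is just a matrix-vector product:

```python
# Multiply the original matrix by the weight vector and sum each row
weighted_sum = A @ weights
print(weighted_sum.round(4))   # approx [2.5217, 0.549, 0.8136, 0.2616]
```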
25. Analytic Hierarchy Process Introduction
• Step 7 in AHP
  • Divide each Weighted Sum Value by its Criteria Weight to get a per-row estimate of the eigenvalue λ (since A × eigenvector = λ × eigenvector), then average the estimates.

                   Weighted Sum Value   Criteria Weights   λ estimate
  Price or Cost    2.5217               0.6038             2.5217/0.6038 = 4.1762
  Storage Space    0.549                0.1365             0.549/0.1365 = 4.0225
  Camera Quality   0.8136               0.1958             0.8136/0.1958 = 4.1553
  Looks            0.2616               0.0646             0.2616/0.0646 = 4.0488

  λmax = (4.1762 + 4.0225 + 4.1553 + 4.0488) / 4 = 4.1007

  Consistency Index: C.I. = (λmax - n) / (n - 1) = (4.1007 - 4) / (4 - 1) = 0.03358
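Continuing the sketch, λmax and the Consistency Index take a few lines:

```python
# lambda_max: average of the per-row eigenvalue estimates
n = len(weights)                                # n = 4 criteria
lambda_max = (weighted_sum / weights).mean()    # approx 4.1007
CI = (lambda_max - n) / (n - 1)                 # approx 0.0336
print(round(lambda_max, 4), round(CI, 4))
```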
26. Analytic Hierarchy Process Introduction
• Step 8 in AHP
  • Calculate the consistency ratio: Consistency Ratio (C.R.) = C.I. / Random Index, where the Random Index depends on the matrix size (0.9 for n = 4).
  • Here, C.R. = 0.03358 / 0.9 = 0.037311.
  • A threshold of 0.1 is a rule of thumb; 0 is the ideal.
  • If C.R. < 0.1, we can assume our judgments are reasonably consistent, and we may continue the AHP decision-making process using the criteria weights.
  • If C.R. > 0.1, what can we do? (See the next slide.)
https://bpmsg.com/ahp-consistency-ratio/
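A sketch of the ratio check, using the commonly cited Saaty Random Index values:

```python
# Saaty's Random Index (RI) for matrix sizes n = 1..10
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

CR = CI / RI[n]        # approx 0.0373 for this example
assert CR < 0.1, "Judgments too inconsistent; revise the pairwise matrix"
```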
27. Analytic Hierarchy Process Introduction
• What is the consistency ratio checking? Consider three criteria A, B, C with this matrix:

       A     B     C
  A    1     2     5
  B    1/2   1     3
  C    1/5   1/3   1

• A to B is 2 : 1 (A is more important); A to C is 5 : 1 (A is more important).
• For perfect consistency, B to C should then be 5/2 = 2.5 (B is more important); the entered value of 3 deviates slightly.
• In principle the pairwise judgments can't vary too much from this transitivity; the consistency ratio is the way to evaluate it.
• How to handle a high consistency ratio: https://bpmsg.com/ahp-high-consistency-ratio/
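Running the same sketch on this 3×3 example confirms that the small departure from transitivity (3 instead of 2.5) yields a very low consistency ratio:

```python
M = np.array([[1,   2,   5],
              [1/2, 1,   3],
              [1/5, 1/3, 1]])
w3 = (M / M.sum(axis=0)).mean(axis=1)      # approximate weights
lam3 = (M @ w3 / w3).mean()                # approx 3.004
CI3 = (lam3 - 3) / (3 - 1)
print(CI3 / RI[3])                         # approx 0.003, well under 0.1
```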
28. Analytic Hierarchy Process Introduction
• We found the importance ranking below (see ahp_practice.ipynb):

  Attribute or Criteria   Criteria Weights   Rank of importance (priority)
  Price or Cost           0.6038             1
  Storage Space           0.1365             3
  Camera Quality          0.1958             2
  Looks                   0.0646             4
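Pulling the snippets together, an end-to-end helper (an illustrative sketch, not the deck's ahp_practice.ipynb) might look like:

```python
def ahp_weights(A, ri=RI):
    """Return (criteria weights, consistency ratio) for one
    pairwise comparison matrix, using the row-average approximation."""
    n = A.shape[0]
    w = (A / A.sum(axis=0)).mean(axis=1)   # normalized-column row means
    lam = (A @ w / w).mean()               # lambda_max estimate
    cr = ((lam - n) / (n - 1)) / ri[n]
    return w, cr

w, cr = ahp_weights(A)
names = ["Price or Cost", "Storage Space", "Camera Quality", "Looks"]
for name, wi in sorted(zip(names, w), key=lambda t: -t[1]):
    print(f"{name}: {wi:.4f}")             # ranked by weight
print("CR =", round(cr, 4))                # approx 0.0373
```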
29. Analytic Hierarchy Process Introduction
• Some more complex hierarchies follow.
• Homework 1: Choosing an automobile
  https://en.wikipedia.org/wiki/Analytic_hierarchy_process_%E2%80%93_car_example
33. Analytic Hierarchy Process Introduction
• Step 4: Make a table of the alternatives and choose (the budget is 25,000).
[Table residue; the recoverable arithmetic compares purchase prices: 31090 - 20360 = 10730 (absolute gap), 31090 / 20360 = 1.53 (price ratio), and 25000 - 20360 = 4640 (headroom left under the budget)]
34. Analytic Hierarchy Process Introduction
• Step 5: Make a table and choose (derive preferences based on evaluation).
• How can we get this information?
  Ans: literature review, questionnaires, crawling social media, and so on. These sources help us fill in the values of the matrix.
35. Analytic Hierarchy Process Introduction
• Step 6: Make a table and choose.
  • Derive preferences based on evaluation.
  • How do we evaluate the relation between the factors 「Passengers」 and 「Capacity」?
36. Analytic Hierarchy Process Introduction
• Step 7: Purchase price: build its pairwise matrix and get the eigenvector.
• Step 8: Fuel costs: get the eigenvector.
37. Analytic Hierarchy Process Introduction
• Step 9: Maintenance cost: get the eigenvector.
• Step 10: Resale value: get the eigenvector.
38. Analytic Hierarchy Process Introduction
• Step 11: Safety: get the eigenvector.
• Step 12: Style: get the eigenvector.
39. Analytic Hierarchy Process Introduction
• Step 13: Cargo capacity: get the eigenvector.
• Step 14: Passenger capacity: get the eigenvector.
40. Analytic Hierarchy Process Introduction
• Each alternative has a priority corresponding to its fit against all the judgment tables for the aspects of Cost, Safety, Style, and Capacity.
• Here is a summary of the global priorities of the alternatives.
[Judgment-table summary shown as an image in the original slides]
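The synthesis step behind such a summary is a single matrix-vector product. The numbers below are hypothetical placeholders, not values from the car example:

```python
# Global priority of each alternative = sum over criteria of
# (criterion weight) x (alternative's local priority under that criterion)
criterion_weights = np.array([0.50, 0.30, 0.20])   # e.g. Cost, Safety, Style
local_priorities = np.array([                      # rows: alternatives
    [0.60, 0.20, 0.30],
    [0.25, 0.50, 0.50],
    [0.15, 0.30, 0.20],
])
global_priorities = local_priorities @ criterion_weights
print(global_priorities)   # choose the alternative with the largest value
```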
42. AHP application
• While the math can be done by hand or with a calculator, it is far more common to use one of several computerized methods for entering and synthesizing the judgments.
• The simplest of these involve standard spreadsheet software, while the most complex use custom software, often augmented by special devices for acquiring the judgments of decision makers gathered in a meeting room.
[Image: a typical device for entering judgments in an AHP group decision-making session]
43. AHP application
• Grouping users by their behavior using AHP
  • Assume we collected a few days of customer behavior data from the internet behavior logs of our online shop.
  • The goal is to find out the preferences of our customers, significant or insignificant (we use more than one method to achieve this).
https://www.slideshare.net/orozcohsu/customer-behavior-analysis-240565709
45. Homework
• Homework 1: Study the complex hierarchy for choosing an automobile
  https://en.wikipedia.org/wiki/Analytic_hierarchy_process_%E2%80%93_car_example
• Homework 2: What are eigenvalues and eigenvectors?
  https://medium.com/sho-jp/linear-algebra-part-6-eigenvalues-and-eigenvectors-35365dc4365a
• Homework 3: Study the Analytic Network Process (ANP)
  https://en.wikipedia.org/wiki/Analytic_network_process