The document discusses how search engines can sell advertising tied to search queries through an auction-based system. It describes how search queries express a user's intent, allowing targeted advertising. The document proposes using a Vickrey-Clarke-Groves auction to encourage truthful bidding by advertisers by having the winner pay an amount equal to the harm caused to other bidders. This system maximizes total advertiser valuations and incentivizes truthful reporting of private valuations.
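The pricing rule described above can be sketched for a keyword auction with ranked ad slots. The slot click-through rates, bids, and greedy allocation below are illustrative assumptions, not details from the document; the key idea is that each winner pays the value lost by everyone else due to its presence.

```python
def welfare(bids, ctrs, exclude=None):
    # Maximum total advertiser value: match the highest remaining bids
    # with the highest click-through-rate slots.
    ranked = sorted((b for i, b in enumerate(bids) if i != exclude), reverse=True)
    return sum(b * c for b, c in zip(ranked, sorted(ctrs, reverse=True)))

def vcg_payments(bids, ctrs):
    # Each winner pays the harm it causes: the value the other bidders
    # would get without it, minus the value they actually get with it.
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    slots = sorted(ctrs, reverse=True)
    value = {i: bids[i] * slots[k] for k, i in enumerate(order[: len(slots)])}
    total = welfare(bids, ctrs)
    return {i: welfare(bids, ctrs, exclude=i) - (total - value.get(i, 0.0))
            for i in range(len(bids))}

# Two slots with click rates 0.5 and 0.2; three bidders with per-click values.
print(vcg_payments([10, 6, 4], [0.5, 0.2]))
```

With a single slot this reduces to the familiar second-price auction: the winner pays the runner-up's bid.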
Bio-inspired Algorithms for Evolving the Architecture of Convolutional Neural... (Ashray Bhandare)
In this thesis, three bio-inspired algorithms, viz. the genetic algorithm, particle swarm optimizer (PSO), and grey wolf optimizer (GWO), are used to optimally determine the architecture of a convolutional neural network (CNN) that classifies handwritten numbers. The CNN is a class of deep feed-forward network that has seen major success in the field of visual image analysis. During training, a good CNN architecture is capable of extracting complex features from the given training data; however, at present, there is no standard way to determine the architecture of a CNN. Domain knowledge and human expertise are required in order to design one. Typically, architectures are created by experimenting with and modifying a few existing networks.
The bio-inspired algorithms determine the exact architecture of a CNN by evolving the various hyperparameters of the architecture for a given application. The proposed method was tested on the MNIST dataset, a large database of handwritten digits that is commonly used in many machine-learning models. The experiment was carried out on an Amazon Web Services (AWS) GPU instance, which helped reduce the experiment time. The performance of all three algorithms was comparatively studied. The results show that the bio-inspired algorithms are capable of generating successful CNN architectures, and the proposed method performs the entire process of architecture generation without any human intervention.
The fuzzy soft set approach plays a crucial role in decision making when combined with the Dempster–Shafer theory of evidence. First, the uncertainty degrees of several parameters are obtained via grey relational analysis, which is applied to calculate the grey mean relational degree. Secondly, mass functions for the different independent choices over those parameters are assigned according to the uncertainty degrees. Lastly, Dempster's rule of evidence combination is utilized to aggregate the choices into a collective choice. This soft-computing-based method has been applied to a decision-making problem.
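The combination step mentioned above can be illustrated with a minimal sketch of Dempster's rule: pairwise products of masses are pooled on the intersections of focal sets and renormalized by the total non-conflicting mass. The two mass functions over a toy two-element frame are hypothetical.

```python
def dempster_combine(m1, m2):
    # Combine two mass functions (focal set -> mass) by Dempster's rule:
    # multiply the masses of every pair of focal sets, pool each product
    # on the intersection, and renormalize by 1 - K, where K is the mass
    # that fell on empty intersections (the conflict).
    combined, conflict = {}, 0.0
    for b, p in m1.items():
        for c, q in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Hypothetical evidence over the frame {x, y} from two independent sources.
m1 = {frozenset("x"): 0.6, frozenset("xy"): 0.4}
m2 = {frozenset("x"): 0.5, frozenset("y"): 0.3, frozenset("xy"): 0.2}
fused = dempster_combine(m1, m2)
print({tuple(sorted(a)): round(v, 3) for a, v in fused.items()})
```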
Introduction of Quantum Annealing and D-Wave Machines (Arithmer Inc.)
This slide deck was used for the Arithmer seminar in April 2021, presented by Dr. Yuki Bando.
It introduces quantum computers, the D-Wave series, and their application to optimization problems in industry.
The "Arithmer Seminar" is held weekly; professionals from within and outside our company give lectures on their respective areas of expertise.
The slides were made by a lecturer from outside our company and are shared here with his/her permission.
Arithmer Inc. is a mathematics company that began at the University of Tokyo Graduate School of Mathematical Sciences. We apply modern mathematics to introduce advanced new AI systems as solutions across a variety of fields. At Arithmer, we believe it is our job to work out how to use AI skillfully to improve work efficiency and to produce results that are useful to society.
1. Planning involves finding a sequence of actions that achieves a goal starting from an initial state. It uses a set of operators that define the possible actions and their effects.
2. A plan is a sequence of operator instances that transforms the initial state into a goal state. Classical planning assumes fully observable, deterministic environments.
3. Planning problems can be represented using a logical language that describes states, goals, actions and their preconditions and effects. This representation allows planning algorithms to operate over problems.
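The representation in point 3 can be sketched directly: states as sets of ground facts, operators as precondition/add/delete lists, and planning as breadth-first search over states. The toy door-and-room domain below is a hypothetical illustration, not from the document.

```python
from collections import deque

def plan(initial, goal, operators):
    # Breadth-first search from the initial state: apply any operator
    # whose preconditions hold, producing a successor state by removing
    # the delete list and adding the add list.
    start = frozenset(initial)
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal <= state:
            return actions
        for name, (pre, add, delete) in operators.items():
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [name]))
    return None  # goal unreachable

ops = {  # operator name -> (preconditions, add list, delete list)
    "open_door": (frozenset({"door_closed"}),
                  frozenset({"door_open"}), frozenset({"door_closed"})),
    "walk_through": (frozenset({"door_open", "outside"}),
                     frozenset({"inside"}), frozenset({"outside"})),
}
print(plan({"door_closed", "outside"}, frozenset({"inside"}), ops))
# → ['open_door', 'walk_through']
```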
This document provides an overview of the Situation Calculus, a formal logic framework for representing states, actions, and how actions transform states. It describes key components of the Situation Calculus including: (1) representing states as constants and using predicates to describe state properties, (2) representing actions and how they change state properties using effect axioms, (3) using frame axioms to represent properties that don't change with actions, and (4) generating plans by proving the existence of goal states and extracting the actions. Challenges with the approach include dealing with ramifications of actions and specifying all relevant preconditions and qualifications.
Collaborative filtering is a technique used by recommender systems to predict items users may like based on opinions of similar users. K-nearest neighbors (KNN) is a collaborative filtering algorithm that finds the k most similar users and bases predictions on the ratings of those neighbors. The document describes KNN collaborative filtering, including finding neighbor similarity, making predictions, and evaluating error rates on a movie recommendation system using the MovieLens dataset.
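The neighbor-similarity and prediction steps can be sketched minimally: cosine similarity over co-rated items, then a similarity-weighted average of the k nearest neighbors' ratings. The tiny ratings dictionary stands in for MovieLens and is purely illustrative.

```python
import math

def cosine(u, v):
    # Cosine similarity restricted to items both users have rated.
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(u[i] ** 2 for i in common))
           * math.sqrt(sum(v[i] ** 2 for i in common)))
    return num / den

def predict(ratings, user, item, k=2):
    # Similarity-weighted average over the k most similar users
    # who have rated the target item.
    neighbors = sorted(
        ((cosine(ratings[user], ratings[v]), ratings[v][item])
         for v in ratings if v != user and item in ratings[v]),
        reverse=True)[:k]
    num = sum(s * r for s, r in neighbors)
    den = sum(abs(s) for s, _ in neighbors)
    return num / den if den else None

ratings = {  # hypothetical user -> {movie: rating} data
    "alice": {"m1": 5, "m2": 3},
    "bob":   {"m1": 4, "m2": 3, "m3": 4},
    "carol": {"m1": 1, "m2": 5, "m3": 2},
}
print(predict(ratings, "alice", "m3", k=2))
```

Because alice's ratings agree far more with bob's than with carol's, the prediction for "m3" lands closer to bob's rating of 4 than to carol's 2.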
Fuzzy logic is very important for any automated or intelligent system. We train the system with the help of fuzzy logic and fuzzy systems so that it can behave and reason like a human being. In this slide deck, the fuzzy inference system is explained with some numerical problems.
This slide deck provides a survey of research in the field of coreference resolution. We survey 10 significant research papers, provide a detailed description of the problem, and suggest future research directions.
The document discusses scheduling algorithms for hardware synthesis. It describes different types of scheduling problems including unconstrained scheduling, scheduling with timing constraints, and scheduling with resource constraints. It provides examples of algorithms for each type, such as ASAP for unconstrained scheduling, and Bellman-Ford and Liao-Wong algorithms for scheduling under detailed timing constraints. The goal of scheduling is to assign start times to operations under given constraints to optimize area and latency.
I. FSSP (Progression Planner) II. BSSP (Regression Planner) (vikas dhakane)
The document discusses forward state space planning (FSSP) and backward state space planning (BSSP). FSSP, also called progression planning, starts from the initial state and plans a sequence of actions to reach the goal state. BSSP, also called regression planning, starts from the goal state and works backward to determine which actions could have led to that state. The document provides an example of using both FSSP and BSSP to forecast weather based on given rules and facts about temperature, humidity, clouds, etc.
This document provides an overview of logical agents and knowledge representation. It discusses knowledge-based agents and the Wumpus world example. It introduces propositional logic and various inference techniques like forward chaining, backward chaining, and resolution that can be used for automated reasoning. It also discusses concepts like entailment, models, validity, and satisfiability. Finally, it discusses how these logical concepts can be applied to build an agent that reasons about the Wumpus world using propositional logic.
The document provides an overview of recommender systems. It discusses their typical architecture and describes the main paradigms: collaborative filtering, content-based, knowledge-based, and hybrid recommender systems. The document then focuses on collaborative filtering techniques such as user-based nearest-neighbor and item-based collaborative filtering. It also discusses latent factor models, matrix factorization approaches, and context-based recommender systems.
This document discusses neural graph collaborative filtering. It begins with an overview of recommender systems and collaborative filtering. It then discusses latent factor models, including matrix factorization and neural collaborative filtering (NCF). A key development was neural graph collaborative filtering (NGCF), which uses graph neural networks to model high-order connectivity in the user-item graph to address sparsity issues. NGCF aims to learn better embeddings by propagating user and item information through the graph.
The document discusses local search algorithms, including gradient descent, the Metropolis algorithm, simulated annealing, and Hopfield neural networks. It provides details on how each algorithm works, such as gradient descent taking steps proportional to the negative gradient of a function to find a local minimum. The algorithms are then compared: for example, local search for the maximum-cut problem and Hopfield neural networks both rely on state-flipping moves, and simulated annealing builds on the Metropolis acceptance rule. Advantages and disadvantages of local search algorithms are presented.
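The two core update rules mentioned can be sketched briefly: gradient descent steps against the gradient, while simulated annealing accepts worse moves with a temperature-dependent Metropolis probability. The quadratic objective and the geometric cooling schedule below are illustrative choices, not taken from the document.

```python
import math, random

def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly step proportional to the negative gradient.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def simulated_annealing(f, x0, neighbor, temp=1.0, cooling=0.95, steps=500):
    # Metropolis rule: always accept an improving move; accept a worse
    # move with probability exp(-delta / T), where T decays each step.
    x = x0
    for _ in range(steps):
        cand = neighbor(x)
        delta = f(cand) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = cand
        temp *= cooling
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))  # ≈ 3.0
```

As the temperature falls, simulated annealing behaves more and more like pure hill-climbing, which is what lets it escape shallow local minima early while still converging late.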
Paper review: Measuring the Intrinsic Dimension of Objective Landscapes (Wuhyun Rico Shin)
This document summarizes a paper that proposes measuring the intrinsic dimension (dint) of neural networks as a way to understand how many parameters are truly needed. It trains networks in randomly oriented lower-dimensional subspaces and takes dint to be the smallest dimension at which solutions first appear. The paper applies this to CNNs and fully connected networks on MNIST, finding that dint is much smaller than the raw parameter count and is robust across models. It also uses dint to compare task difficulties, finding reinforcement learning environments easier than image classification.
This document provides an overview of different agent architectures, including reactive, deliberative, and hybrid architectures. It discusses key concepts like the types of environments agents can operate in, including accessible vs inaccessible, deterministic vs non-deterministic, episodic vs non-episodic, and static vs dynamic environments. Reactive architectures are focused on fast reactions to environmental changes with minimal internal representation and computation. Deliberative architectures emphasize long-term planning and goal-driven behavior using symbolic representations. Rodney Brooks proposed that intelligence can emerge from the interaction of simple agents following stimulus-response rules, without complex internal models, as seen in ant colonies.
What is Heuristics?
A heuristic is a technique used to solve a problem faster than classic methods, or to find an approximate solution when classical methods fail to find an exact one. Heuristics are problem-solving techniques that yield practical, quick solutions.
Heuristics are strategies derived from past experience with similar problems. They use practical methods and shortcuts to produce solutions that may not be optimal but are sufficient within a limited timeframe.
History
Psychologists Daniel Kahneman and Amos Tversky developed the study of heuristics in human decision-making in the 1970s and 1980s. However, the concept was first introduced by Nobel laureate Herbert A. Simon, whose primary object of research was problem-solving.
Why do we need heuristics?
Heuristics are used in situations that require a quick solution. When facing complex situations with limited resources and time, heuristics help companies make quick decisions through shortcuts and approximate calculations. Most heuristic methods involve mental shortcuts based on past experience.
Heuristic techniques
The heuristic method might not always give us the best solution, but it reliably helps us find a good solution in a reasonable time.
Depending on the context, different heuristic methods suit a problem's scope. The most common are trial and error, guesswork, the process of elimination, and historical data analysis. These methods rely on readily available information that is not specific to the problem but is broadly applicable. They include the representativeness, affect, and availability heuristics.
Heuristic techniques can be divided into two categories:
Direct Heuristic Search techniques in AI
These include blind search, uninformed search, and blind control strategies. Such techniques are not always practical, as they demand considerable memory and time: they search the complete space for a solution and use an arbitrary ordering of operations.
Examples of direct heuristic search techniques include Breadth-First Search (BFS) and Depth-First Search (DFS).
Weak Heuristic Search techniques in AI
These include informed search, heuristic search, and heuristic control strategies. They are helpful when applied properly to the right types of tasks, and they usually require domain-specific information.
Examples of weak heuristic search techniques include Best-First Search and A*.
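The informed techniques named above can be sketched with a minimal A*, which always expands the node minimizing g (cost so far) plus h (heuristic estimate to the goal). The small weighted graph and heuristic values below are hypothetical, chosen so the heuristic is admissible.

```python
import heapq

def a_star(graph, h, start, goal):
    # Priority queue ordered by f = g + h; best_g prunes worse revisits.
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(frontier, (ng + h[nbr], ng, nbr, path + [nbr]))
    return None  # no route to the goal

graph = {  # hypothetical road map: node -> [(neighbor, edge cost)]
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
}
h = {"A": 3, "B": 2, "C": 1, "D": 0}  # admissible estimates of cost to D
print(a_star(graph, h, "A", "D"))  # → (4, ['A', 'B', 'C', 'D'])
```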
The document discusses using the Extended Promethee II (EXPROM II) method to determine the best student among alternatives based on multiple criteria. The EXPROM II method is an extension of the Promethee II method for multi-criteria decision making. It involves normalizing data, calculating preference indexes through pairwise comparisons of alternatives, and determining leaving and entering flows to rank alternatives. The document provides an example of applying the EXPROM II method to select the best student among four alternatives based on GPA, leave status, semester performance, and organizational activities.
Advanced lectures taught to third-year students of the Faculty of Computers at Beni Suef to develop their research abilities; these topics are taught at the doctoral level. We want our computing students to stand out and excel in scientific research.
Correlation, causation and incrementality recommendation problems at Netflix ... (Roelof van Zwol)
Within Netflix, personalization is a key differentiator, helping members quickly discover new content that matches their taste. Done well, it creates an immersive user experience; however, when a recommendation is out of tune, it is immediately noticed by our members. During this presentation I will cover some of the personalization and recommendation tasks that jointly define the Netflix user experience, which entertains more than 130M members worldwide. In particular, I will focus on several algorithmic challenges related to the launch of new Netflix originals in the service, and go over concepts such as causality, incrementality, and explore-exploit strategies.
The research presented in this talk represents the collaborative efforts of a team of research scientists and engineers at Netflix on our journey to create best in class user experiences.
Heuristic search algorithms use heuristics, or problem-specific knowledge, to guide the search for a solution. Some heuristics guarantee completeness while others may sacrifice completeness to improve efficiency. A heuristic function estimates the cost to reach the goal state from the current state. For example, in the 8-puzzle problem the Manhattan distance heuristic estimates this cost as the sum of the distances each misplaced tile would need to move to reach its goal position. The example shows applying the Manhattan distance heuristic to guide the search for a solution to instances of the 8-puzzle problem.
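The Manhattan distance heuristic described here can be computed directly: for each misplaced tile, add the horizontal and vertical moves needed to reach its goal cell. The sample board below is an illustrative 8-puzzle state, with 0 denoting the blank.

```python
def manhattan(state, goal):
    # Sum over tiles of |row difference| + |column difference| between
    # the tile's current cell and its goal cell; the blank is skipped.
    dist = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        dist += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return dist

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)  # tiles 5 and 8 are each one move off
print(manhattan(state, goal))  # → 2
```

Because each move slides exactly one tile one cell, this sum never overestimates the true number of moves, which is what makes it a safe guide for search.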
An ad network aggregates ad inventory from publishers and sells it to advertisers. It acts as a broker between the two parties. Ad networks make the process convenient by matching advertiser demand with publisher supply. When a match occurs, the ad details are sent to the publisher to display. This allows advertisers access to premium inventory at scale, while publishers can generate revenue immediately by listing their inventory. Major ad networks include Google AdSense, PropellerAds, and Adcash.
Display Advertising 2.0 - Arrival of Real Time Bidding (gauravjain000)
The document discusses the arrival of real-time bidding (RTB) as a game changer for digital display advertising. RTB allows media buyers to evaluate and bid on ad impressions in real-time auctions, enabling more precise targeting of audiences at scale. This shift from static pricing to dynamic auctions based on predictive models of conversion probabilities represents a fundamental change in how display inventory is bought and sold. The document cites evidence that RTB is driving rapid growth in non-guaranteed digital ad spending in markets like the US. It also outlines benefits of RTB for advertisers, publishers and consumers, and provides an example case study demonstrating improved campaign metrics from an RTB-enabled platform in India.
The document discusses scheduling algorithms for hardware synthesis. It describes different types of scheduling problems including unconstrained scheduling, scheduling with timing constraints, and scheduling with resource constraints. It provides examples of algorithms for each type, such as ASAP for unconstrained scheduling, and Bellman-Ford and Liao-Wong algorithms for scheduling under detailed timing constraints. The goal of scheduling is to assign start times to operations under given constraints to optimize area and latency.
I. FSSP(Progression Planner) II. BSSP(Regression Plannervikas dhakane
The document discusses forward state space planning (FSSP) and backward state space planning (BSSP). FSSP, also called progression planning, starts from the initial state and plans a sequence of actions to reach the goal state. BSSP, also called regression planning, starts from the goal state and works backward to determine which actions could have led to that state. The document provides an example of using both FSSP and BSSP to forecast weather based on given rules and facts about temperature, humidity, clouds, etc.
This document provides an overview of logical agents and knowledge representation. It discusses knowledge-based agents and the Wumpus world example. It introduces propositional logic and various inference techniques like forward chaining, backward chaining, and resolution that can be used for automated reasoning. It also discusses concepts like entailment, models, validity, and satisfiability. Finally, it discusses how these logical concepts can be applied to build an agent that reasons about the Wumpus world using propositional logic.
The document provides an overview of recommender systems. It discusses the typical architecture of recommender systems and describes three main types: collaborative filtering systems, content-based systems, and knowledge-based systems. It also covers paradigms like collaborative filtering, content-based, knowledge-based, and hybrid recommender systems. The document then focuses on collaborative filtering techniques like user-based nearest neighbor collaborative filtering and item-based collaborative filtering. It also discusses latent factor models, matrix factorization approaches, and context-based recommender systems.
This document discusses neural graph collaborative filtering. It begins with an overview of recommender systems and collaborative filtering. It then discusses latent factor models, including matrix factorization and neural collaborative filtering (NCF). A key development was neural graph collaborative filtering (NGCF), which uses graph neural networks to model high-order connectivity in the user-item graph to address sparsity issues. NGCF aims to learn better embeddings by propagating user and item information through the graph.
The document discusses local search algorithms, including gradient descent, the Metropolis algorithm, simulated annealing, and Hopfield neural networks. It provides details on how each algorithm works, such as gradient descent taking steps proportional to the negative gradient of a function to find a local minimum. The algorithms are compared, with some having similarities in their methods, like maximum cut problem and Hopfield neural networks using state flipping algorithms, and Metropolis and gradient descent using simulated annealing. Advantages and disadvantages of local search algorithms are presented.
Paper review: Measuring the Intrinsic Dimension of Objective Landscapes.Wuhyun Rico Shin
This document summarizes a paper that proposes measuring the intrinsic dimension (dint) of neural networks as a way to understand how many parameters are truly needed. It trains networks in randomly oriented lower-dimensional subspaces and finds dint as the dimension where solutions first appear. The paper applies this to CNNs and FCs on MNIST, finding dint is much smaller than parameters and robust across models. It also uses dint to compare task difficulties, finding reinforcement learning environments easier than image classification.
This document provides an overview of different agent architectures, including reactive, deliberative, and hybrid architectures. It discusses key concepts like the types of environments agents can operate in, including accessible vs inaccessible, deterministic vs non-deterministic, episodic vs non-episodic, and static vs dynamic environments. Reactive architectures are focused on fast reactions to environmental changes with minimal internal representation and computation. Deliberative architectures emphasize long-term planning and goal-driven behavior using symbolic representations. Rodney Brooks proposed that intelligence can emerge from the interaction of simple agents following stimulus-response rules, without complex internal models, as seen in ant colonies.
What is Heuristics?
A heuristic is a technique that is used to solve a problem faster than the classic methods. These techniques are used to find the approximate solution of a problem when classical methods do not. Heuristics are said to be the problem-solving techniques that result in practical and quick solutions.
Heuristics are strategies that are derived from past experience with similar problems. Heuristics use practical methods and shortcuts used to produce the solutions that may or may not be optimal, but those solutions are sufficient in a given limited timeframe.
History
Psychologists Daniel Kahneman and Amos Tversky have developed the study of Heuristics in human decision-making in the 1970s and 1980s. However, this concept was first introduced by the Nobel Laureate Herbert A. Simon, whose primary object of research was problem-solving.
Why do we need heuristics?
Heuristics are used in situations in which there is the requirement of a short-term solution. On facing complex situations with limited resources and time, Heuristics can help the companies to make quick decisions by shortcuts and approximated calculations. Most of the heuristic methods involve mental shortcuts to make decisions on past experiences.
Heuristic techniques
The heuristic method might not always provide us the finest solution, but it is assured that it helps us find a good solution in a reasonable time.
Based on context, there can be different heuristic methods that correlate with the problem's scope. The most common heuristic methods are - trial and error, guesswork, the process of elimination, historical data analysis. These methods involve simply available information that is not particular to the problem but is most appropriate. They can include representative, affect, and availability heuristics.
We can perform the Heuristic techniques into two categories:
Direct Heuristic Search techniques in AI
It includes Blind Search, Uninformed Search, and Blind control strategy. These search techniques are not always possible as they require much memory and time. These techniques search the complete space for a solution and use the arbitrary ordering of operations.
The examples of Direct Heuristic search techniques include Breadth-First Search (BFS) and Depth First Search (DFS).
Weak Heuristic Search techniques in AI
It includes Informed Search, Heuristic Search, and Heuristic control strategy. These techniques are helpful when they are applied properly to the right types of tasks. They usually require domain-specific information.
The examples of Weak Heuristic search techniques include Best First Search (BFS) and A*.
The document discusses using the Extended Promethee II (EXPROM II) method to determine the best student among alternatives based on multiple criteria. The EXPROM II method is an extension of the Promethee II method for multi-criteria decision making. It involves normalizing data, calculating preference indexes through pairwise comparisons of alternatives, and determining leaving and entering flows to rank alternatives. The document provides an example of applying the EXPROM II method to select the best student among four alternatives based on GPA, leave status, semester performance, and organizational activities.
محاضرات متقدمة تدرس لطلاب حاسبات بنى سويف السنة الثالثة لتنمية قدراتهم البحثية وهذة الموضوعات تدرس على مستوى الدكتوراة - - نريد تميز طلاب حاسبات ليتميزو فى البحث العلمى -
Correlation, causation and incrementally recommendation problems at netflix ...Roelof van Zwol
Within Netflix, personalization is a key differentiator, helping members to quickly discover new content that matches their taste. Done well, it creates an immersive user experience, however when the recommendation is out of tune, it is immediately noticed by our members. During this presentation I will cover some of the personalization and recommendation tasks that jointly define the Netflix user experience that entertains more that 130M members world wide. In particular, I will focus on several of the algorithmic challenges related to the launch of new Netflix originals in the service, and go over concepts such as causality, incrementality and explore-exploit strategies.
The research presented in this talk represents the collaborative efforts of a team of research scientists and engineers at Netflix on our journey to create best in class user experiences.
Heuristic search algorithms use heuristics, or problem-specific knowledge, to guide the search for a solution. Some heuristics guarantee completeness while others may sacrifice completeness to improve efficiency. A heuristic function estimates the cost to reach the goal state from the current state. For example, in the 8-puzzle problem the Manhattan distance heuristic estimates this cost as the sum of the distances each misplaced tile would need to move to reach its goal position. The example shows applying the Manhattan distance heuristic to guide the search for a solution to instances of the 8-puzzle problem.
An ad network aggregates ad inventory from publishers and sells it to advertisers. It acts as a broker between the two parties. Ad networks make the process convenient by matching advertiser demand with publisher supply. When a match occurs, the ad details are sent to the publisher to display. This allows advertisers access to premium inventory at scale, while publishers can generate revenue immediately by listing their inventory. Major ad networks include Google AdSense, PropellerAds, and Adcash.
Display Advertising 2.0 - Arrival of Real Time Biddinggauravjain000
The document discusses the arrival of real-time bidding (RTB) as a game changer for digital display advertising. RTB allows media buyers to evaluate and bid on ad impressions in real-time auctions, enabling more precise targeting of audiences at scale. This shift from static pricing to dynamic auctions based on predictive models of conversion probabilities represents a fundamental change in how display inventory is bought and sold. The document cites evidence that RTB is driving rapid growth in non-guaranteed digital ad spending in markets like the US. It also outlines benefits of RTB for advertisers, publishers and consumers, and provides an example case study demonstrating improved campaign metrics from an RTB-enabled platform in India.
Understanding Real-Time Bidding (RTB) From the Publisher Perspectivestephsfo
This document provides an overview of real-time bidding (RTB) from the publisher perspective. It discusses how RTB can increase the value of ad impressions by providing greater transparency about individual users in real-time. With RTB, advertisers can bid on specific impressions based on unique user characteristics rather than general audience segments, allowing publishers to maximize revenue. The document also notes potential pitfalls for publishers and estimates that RTB will grow to power 3-5% of online ads in 2010, up from less than 1% in 2009.
Data-driven Reserve Prices for Social Advertising Auctions at LinkedInKun Liu
Online advertising auctions constitute an important source of revenue for search engines such as Google and Bing, as well as social networks such as Facebook, LinkedIn and Twitter. We study the problem of setting the optimal reserve price in a Generalized Second Price auction, guided by auction theory with suitable adaptations to social advertising at LinkedIn. Two types of reserve prices are deployed: one at the user level, which is kept private by the publisher, and the other at the audience segment level, which is made public to advertisers. We demonstrate through field experiments the effectiveness of this reserve price mechanism to promote demand growth, increase ads revenue, and improve advertiser experience.
How Ad Effectiveness Tools Can Help Optimize Your Media Strategy May 2012Compete
The document provides an overview of a webinar presented by Compete, an advertising effectiveness solutions company. The webinar discusses how Compete uses a panel-based measurement approach to analyze advertising campaigns and maximize effectiveness. It highlights Compete's large consumer panel, methodology for comparing exposed versus unexposed groups, and ability to measure outcomes across the marketing funnel. An example case study shows how Compete was able to analyze view-through rates, upper funnel activity, and branded search for a display campaign to optimize performance.
Measuring Performance in Advertising EffectivenessPerformanceIN
This document discusses measuring advertising effectiveness through attribution. It begins by explaining that attribution can mean different things, such as digital attribution focusing on view-through conversions or cross-channel attribution looking at optimal channel mixes. The document then outlines Neo@Ogilvy's approach to driving performance through attribution by beginning with digital attribution tests and models before expanding to cross-channel effectiveness studies and mix models. It emphasizes starting simple by measuring view-through lifts before using more advanced algorithmic attribution models. The document concludes by discussing opportunities for personalization through social CRM and revenue management solutions.
Creating a social media strategy for a tourism business | Block 1: Basics of ...Francisco Hernandez-Marcos
Creating a social media strategy for a tourism business
Block 1: Basics of online marketing
International Master in Hospitality and Tourism Management
ESCP Europe - Cornell University School of Hotel Administration
A short presentation to discuss the online advertising network industry and to illustrate some of the ways in which it can help a site develop its initial revenues.
Eyeblaster Research Note Cpc Curtail The Growth Display AdvertisingEyeblaster Spain
The document discusses how cost-per-click (CPC) pricing models for display advertising may curtail future growth of the industry. It finds that switching primarily to CPC could reduce publisher revenues as lower-performing campaigns would pay less while higher-performing campaigns would pay much more. This could drive some publishers out of business and reduce inventory, forcing remaining publishers to increase CPC prices and lose some campaigns to other channels. Overall, widespread use of CPC pricing could seriously damage the online advertising industry's growth prospects over the long run.
The document provides an overview of bidding and conversions tools in Google AdWords, including a bid simulator, conversion tracking, and conversion optimizer. The bid simulator allows advertisers to estimate how clicks, costs, and impressions would change with different maximum cost-per-click bids. Conversion tracking enables advertisers to measure the ROI of campaigns by correlating ad clicks to sales and other conversions. The conversion optimizer is a free tool that uses conversion data to automatically optimize cost-per-acquisition bids to maximize conversions and reduce costs.
The document discusses online advertising and how to become a publisher or advertiser on a banner advertising platform. It explains key concepts like impressions, contextual targeting, geo-targeting, and how advertisers pay for banner ads to be shown on publisher websites. It promotes joining the platform by purchasing advertising-publishing bundles and provides estimates of potential monthly earnings.
Evolution of Display Advertising – From GIF to RTBEzhil Raja
The document traces the evolution of display advertising from early static banner ads to today's real-time bidding landscape, noting key developments like rich media, video ads, the rise of ad networks and exchanges, and the growing role of programmatic buying. It also discusses trends in the global and Indian markets, predicting further consolidation and increased adoption of RTB as digital marketing ecosystems mature.
Why Car Dealers Should Choose Display Advertising Versus Paid Search AdvertisingRalph Paglia
Paid search advertising will not replace display advertising because:
1) Paid search inventory is limited to small pools of keywords relevant to few advertisers, while display advertising reaches thousands more brands.
2) Large advertisers require reaching customers at every stage of the purchase funnel, from awareness to purchase, and paid search only addresses later stages.
3) Most advertiser budgets are too large to allocate solely to paid search given its limited inventory. Display advertising allows spreading spending across many more brands and platforms.
The document discusses different bidding strategies for online advertising campaigns, including cost-per-click (CPC), cost-per-thousand impressions (CPM), and conversion optimization. It explains that CPC bidding is best for getting website traffic, CPM bidding focuses on brand awareness, and conversion optimization aims to get customer actions like leads or purchases. The document also provides tips for implementing bidding strategies like starting with automatic CPC bidding, using conversion tracking, and analyzing keyword performance to optimize bids.
Ray owns a pizzeria and created his first Facebook ads campaign. To optimize the campaign, he tests different ad creatives targeting specific customer segments. Ray analyzes reports in Ads Manager to see which ads perform best. The responder demographic report shows males 18-25 have the highest click-through rate for an ad offering a campus pizza deal. Ray continues testing ads and analyzing reports to identify high-performing creatives and reallocate budget accordingly.
Digital marketing, ad operations & terms, ad campaigns, revenue units in digital advertising, life of an ad call, VPAID, types of creatives, targeting, ad types
Ad selection algorithm, types of advertising, sales funnel, remnant inventory
Top 10 Best Ad Networks for Publishers by @monetizemoreMonetizeMore
There are countless ad networks and sifting through them all to find the top ad networks for publishers is difficult. Direct selling to advertisers requires a high amount of resources, resources most publishers don’t have. So, most publishers opt to partner with ad networks to fill ad inventory to monetize their sites. Here are our Top 10 ad network picks for 2016.
Header bidding - Everything you need to knowAutomatad Inc.
Header bidding is an advanced programmatic technique which allows you to call multiple demand partners, or bidders, to bid on your ad inventory before sending the ad request to the ad server.
This increases the CPMs and yield of your inventory. It is used by 71% of US publishers and is a proven method of increasing ad revenue by up to 60%.
In this slide deck, you'll learn what header bidding is, how it works, the current state of the technology, and more.
At Automatad Inc, we've deployed our header bidding solution to 100+ publishers across the globe and seen a drastic increase in revenue.
Similar to Sponsored Search Markets (from Networks, Crowds, and Markets: Reasoning About a Highly Connected World) (20)
Frequency-based Constraint Relaxation for Private Query Processing in Cloud D...Junpei Kawamoto
This document proposes a frequency-based constraint relaxation methodology for private queries in cloud databases. It aims to reduce computational costs for servers while maintaining privacy risks below existing "complete" protocols. The approach relaxes the constraint that servers must check all database items for a query by instead checking a subset, or "handled set", based on search intention frequencies. Evaluation on a real dataset found the approach reduces average query costs to 6.5% of complete protocols while keeping privacy risks comparable.
Securing Social Information from Query Analysis in Outsourced DatabasesJunpei Kawamoto
The document presents two methods for securing social information when databases are outsourced: Query Generalization by Dynamic Hash and Result Generalization by Bloom Filter. Queries are generalized through dynamic hashing to mix user queries and prevent determining relationships. Results are generalized using Bloom filters by including irrelevant tuples to mask which users requested the same tuples. The goal is to protect users' social information and relationships from being discovered by the outsourced database provider. Future work involves implementing and evaluating these methods on a real outsourced database service.
This document presents a study on privacy-preserving techniques for continually publishing location data. The authors propose a new definition of adversarial privacy that aims to preserve utility of published histograms while guaranteeing privacy against adversaries. They assume people's movements follow a Markov process and evaluate their approach on change point detection and frequent path extraction tasks, finding it achieves better utility than differential privacy while providing privacy guarantees.
A Locality Sensitive Hashing Filter for Encrypted Vector DatabasesJunpei Kawamoto
This document describes a locality sensitive hashing (LSH) filter for encrypted vector databases. The LSH filter allows cloud servers to filter database tuples without decrypting encrypted data, improving query efficiency. A whitening transformation is applied to encrypted vectors before inserting them into the LSH index to reduce skew in the vector space. When a query is received, the server computes the LSH of the encrypted query vector to identify candidate tuple groups for similarity checking based on the LSH hash values. Tuples with low estimated similarity are skipped, while those with high similarity have their actual encrypted similarity computed.
Private Range Query by Perturbation and Matrix Based EncryptionJunpei Kawamoto
This document proposes a new method called Inner Product Predicate (IPP) for performing private range queries over encrypted data. The IPP method adds perturbations to attribute values and queries through matrix-based encryption to prevent frequency analysis attacks. Experimental results show the transformed query distributions are different from the originals and query processing time is linear in the number of tuples. Open problems remain around reducing computational costs and defending against attacks using aggregate query results.
From Session 27 of VLDB 2009, a brief introduction to:
1) Anonymization of Set-Valued Data via Top-Down, Local Generalization (He and Naughton)
2) K-Automorphism: A General Framework For Privacy Preserving Network Publication (Zou, Chen, and Özsu)
3) Distribution-based Microdata Anonymization (Koudas, Srivastava, Yu, Zhang)
VLDB2009 study group: http://qwik.jp/vldb2009-study/
Reducing Data Decryption Cost by Broadcast Encryption and Account Assignment ...Junpei Kawamoto
The document proposes two methods for reducing data decryption costs while preserving social information for web applications. The naive method encrypts access control lists but results in long key chains and high decryption costs. The proposed method uses broadcast encryption to encrypt authority information in one encryption and account assignment to reduce decryption candidates, lowering costs and improving precision. Simulation results show the proposed method maintains near-perfect precision independent of user numbers while requiring only one decryption.
Security of Social Information from Query Analysis in DaaSJunpei Kawamoto
This document discusses protecting social information inferred from database queries. It introduces the problem of attackers extracting social relationships between users from query logs. An attack model is presented where users querying similar topics are assumed to be related. The document then proposes a method of query conversion using an extendible hashing tree to map original queries to generalized queries. This aims to prevent determining the exact original queries and therefore the social relationships between users from the query logs. Evaluation of the method shows it can significantly reduce the precision and recall of the attack in determining social information.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefit it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary spending, e.g. when a person document is used instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep track of things. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics are covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Dive into the realm of operating systems (OS) with Pravash Chandra Das, a seasoned Digital Forensic Analyst, as your guide. 🚀 This comprehensive presentation illuminates the core concepts, types, and evolution of OS, essential for understanding modern computing landscapes.
Beginning with the foundational definition, Das clarifies the pivotal role of OS as system software orchestrating hardware resources, software applications, and user interactions. Through succinct descriptions, he delineates the diverse types of OS, from single-user, single-task environments like early MS-DOS iterations, to multi-user, multi-tasking systems exemplified by modern Linux distributions.
Crucial components like the kernel and shell are dissected, highlighting their indispensable functions in resource management and user interface interaction. Das elucidates how the kernel acts as the central nervous system, orchestrating process scheduling, memory allocation, and device management. Meanwhile, the shell serves as the gateway for user commands, bridging the gap between human input and machine execution. 💻
The narrative then shifts to a captivating exploration of prominent desktop OSs, Windows, macOS, and Linux. Windows, with its globally ubiquitous presence and user-friendly interface, emerges as a cornerstone in personal computing history. macOS, lauded for its sleek design and seamless integration with Apple's ecosystem, stands as a beacon of stability and creativity. Linux, an open-source marvel, offers unparalleled flexibility and security, revolutionizing the computing landscape. 🖥️
Moving to the realm of mobile devices, Das unravels the dominance of Android and iOS. Android's open-source ethos fosters a vibrant ecosystem of customization and innovation, while iOS boasts a seamless user experience and robust security infrastructure. Meanwhile, discontinued platforms like Symbian and Palm OS evoke nostalgia for their pioneering roles in the smartphone revolution.
The journey concludes with a reflection on the ever-evolving landscape of OS, underscored by the emergence of real-time operating systems (RTOS) and the persistent quest for innovation and efficiency. As technology continues to shape our world, understanding the foundations and evolution of operating systems remains paramount. Join Pravash Chandra Das on this illuminating journey through the heart of computing. 🌟
Operating System Used by Users in day-to-day life.pptx
Sponsored Search Markets (from Networks, Crowds, and Markets: Reasoning About a Highly Connected World)
1. Reading seminar
Chapter 15 Sponsored Search Markets
from Networks, Crowds, and Markets: Reasoning About a Highly Connected World
Junpei Kawamoto
2. Advertising Tied to Search Behavior
Search engine meets search advertising
Early advertising was sold on the basis of "impressions," like magazine or newspaper ads.
This was not an efficient way to sell ads:
compare showing an ad for calligraphy pens to all people vs. only to people searching for "calligraphy pen".
Search engine queries are a potent way to get users to express their intent.
Keyword-based advertising.
How should keyword-based ads be sold?
2 Chapter 15 - Sponsored Search Markets
3. Advertising Tied to Search Behavior
Paying per click
Cost-per-click (CPC) model
You only pay when a user actually clicks on the ad.
Clicking on an ad represents an even stronger indication of intent than simply issuing a query.
Setting prices through an auction
How should a search engine set the price per click for different queries?
One easy way is for the search engine to post prices.
However, there are far too many queries to decide prices for all of them in advance.
Auction protocols are a good way to decide prices.
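The chapter goes on to develop the GSP and VCG mechanisms for this setting. As a warm-up, a minimal single-item sealed-bid second-price (Vickrey) auction can be sketched as follows; the bid values and names are illustrative, not from the text:

```python
def second_price_auction(bids):
    """Award the item to the highest bidder at the second-highest bid
    price; for a single item, this pricing rule makes truthful bidding
    a dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

# Advertiser x bids 3, y bids 2, z bids 1: x wins and pays 2.
winner, price = second_price_auction({"x": 3, "y": 2, "z": 1})
```

The key property is that the winner's payment depends only on the other bids, so no bidder gains by misreporting their value.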
4. Overview
1. Advertising as a Matching Market
2. Encouraging Truthful Bidding in Matching Markets: The VCG Principle
3. Analyzing the VCG Procedure: Truth-Telling as a Dominant Strategy
4. The Generalized Second Price Auction
5. Equilibria of the Generalized Second Price Auction
6. Ad Quality
7. Complex Queries and Interactions Among Keywords
5. Advertising as a Matching Market
Click-through Rates and Revenues Per Click

click-through rate   slot   advertiser   revenue per click
       10             a         x                3
        5             b         y                2
        2             c         z                1
6. Advertising as a Matching Market
Click-through Rates and Revenues Per Click

click-through rate   slot   advertiser   revenue per click
       10             a         x                3
        5             b         y                2
        2             c         z                1

Click-through rates: e.g. the number of clicks per hour.
Assumptions:
1. Advertisers know the click-through rates;
2. Click-through rates depend only on the slot, not on the contents of the ads;
3. Click-through rates don't depend on the ads in the other slots.
7. Advertising as a Matching Market
Click-through Rates and Revenues Per Click

click-through rate   slot   advertiser   revenue per click
       10             a         x                3
        5             b         y                2
        2             c         z                1

Revenue per click: the expected amount of revenue an advertiser receives per user clicking on the ad.
Assumptions:
1. These values are intrinsic to the advertisers;
2. they don't depend on what else is being shown on the page.
Constructing a Matching Market
The basic ingredients of a matching market, from Chapter 10:
The participants in a matching market consist of a set of buyers and a set of sellers.
Each buyer j has a valuation for the item offered by each seller i. This valuation can depend on the identities of both the buyer and the seller, and we denote it vij.
The goal is to match up buyers with sellers in such a way that no buyer purchases two different items, and the same item isn't sold to two different buyers.
In our setting, let
ri: the click-through rate of slot i;
vj: the revenue per click of advertiser j;
vij = rivj: the benefit advertiser j receives from an ad in slot i. This is advertiser j's valuation for slot i in the language of matching markets.
Slots = sellers; advertisers = buyers. The problem is to assign slots (sellers) to advertisers (buyers) in a matching market.
Advertisers' valuations for the slots, vij = rivj:

  click-through rate   slot   advertiser   revenue per click   valuations for slots a, b, c
  10                   a      x            3                   30, 15, 6
  5                    b      y            2                   20, 10, 4
  2                    c      z            1                   10, 5, 2
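The valuation table above can be reproduced in a few lines. A minimal sketch in Python (variable names are illustrative):

```python
rates = [10, 5, 2]   # click-through rates of slots a, b, c
values = [3, 2, 1]   # revenues per click of advertisers x, y, z

# v_ij = r_i * v_j: one row per advertiser, one column per slot
valuations = [[r * v for r in rates] for v in values]
print(valuations)  # → [[30, 15, 6], [20, 10, 4], [10, 5, 2]]
```

Note that each advertiser's row is a multiple of every other row, which is the special structure of this market.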
This is a matching market with a special structure: the valuations of one buyer are simply a multiple of the valuations of any other buyer.
We assume # of slots = # of advertisers. If not, we can add fictitious slots or advertisers:
the click-through rate of a fictitious slot is 0;
the revenue per click of a fictitious advertiser is 0.
Obtaining Market-Clearing Prices
Using the framework from Chapter 10:
Each seller i announces a price pi for his item.
Each buyer j evaluates her payoff for choosing a particular seller i: it equals her valuation minus the price of this seller's item, vij – pi.
We then build a preferred-seller graph by linking each buyer to the seller or sellers from which she gets the highest payoff.
The prices are market-clearing if this graph has a perfect matching; in this case, we can assign distinct items to all the buyers in such a way that each buyer gets an item that maximizes her payoff.
From Chapter 10 we know:
Market-clearing prices exist for every matching market;
There is a procedure to construct market-clearing prices;
Market-clearing prices always maximize the buyers' total valuation for the items they get.
For the example above, once market-clearing prices are found, the preferred-seller graph has a perfect matching: x is assigned slot a, y slot b, and z slot c.
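Whether a given price vector is market-clearing can be checked by brute force on small examples. A Python sketch (the function name is illustrative; 13, 3, 0 is one market-clearing price vector for the running example, as the check confirms):

```python
from itertools import permutations

def is_market_clearing(prices, valuations):
    """valuations[j][i] = buyer j's value for seller i's item. The prices clear
    the market if the preferred-seller graph has a perfect matching."""
    n = len(prices)
    preferred = []
    for j in range(n):
        payoffs = [valuations[j][i] - prices[i] for i in range(n)]
        best = max(payoffs)
        preferred.append({i for i in range(n) if payoffs[i] == best})
    # brute-force search for a perfect matching inside the preferred-seller graph
    return any(all(perm[j] in preferred[j] for j in range(n))
               for perm in permutations(range(n)))

# Running example, v_ij = r_i * v_j; rows are buyers x, y, z:
vals = [[30, 20, 10], [15, 10, 5], [6, 4, 2]]
print(is_market_clearing([13, 3, 0], vals))  # → True
print(is_market_clearing([0, 0, 0], vals))   # → False (everyone prefers slot a)
```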
This construction of prices can only be carried out by the search engine if it actually knows the valuations of the advertisers.
Next, we consider how to set prices in a setting where the search engine doesn't know these valuations: it must rely on advertisers to report them, without being able to tell whether the reports are truthful.
Encouraging Truthful Bidding in Matching Markets: The VCG Principle
What would be a good price-setting procedure when the search engine doesn't know the advertisers' valuations?
In the early days, variants of the 1st-price auction were used; recall from Chapter 9 that in a 1st-price auction, bidders under-report their valuations.
In the case of a single-item auction, the 2nd-price auction is a solution:
truthful bidding is a dominant strategy;
it avoids many of the pathologies associated with more complex auctions.
What is the analogue of the 2nd-price auction for advertising markets with multiple slots? That is, how can we define a price-setting procedure for matching markets so that truthful reporting of valuations is a dominant strategy for buyers?
The VCG Principle
How can we generalize the 2nd-price single-item auction to a multi-item one? Take a less obvious view of the 2nd-price auction:
the 2nd-price auction produces an allocation that maximizes social welfare;
the winner is charged an amount equal to the "harm" he causes the other bidders by receiving the item.
Consider n bidders with valuations v1 > v2 > ... > vn for a single item (say, an apple); bidder 1 gets the apple.
If bidder 1 weren't present, bidder 2 would have gotten the apple, which is worth v2 to him, and nothing would change for bidders 3 through n, so we can ignore them.
The presence of bidder 1 thus causes harm v2 to bidder 2 (bidder 2 loses an item worth v2) and no harm to anyone else. Therefore bidder 1 must pay v2 as the price of the apple, which is exactly the 2nd-price rule.
Pricing by the harm done to others is the Vickrey-Clarke-Groves (VCG) principle.
Applying the VCG Principle to Matching Markets
We have a set of buyers and a set of sellers, with # of buyers = # of sellers (if not, we can add fictitious buyers or sellers).
Buyer j has a valuation vij for item i. Each buyer knows only her own valuations (independent, private values) and doesn't care what anyone else receives.
The procedure:
assign items to buyers so as to maximize the total valuation;
compute prices using the VCG principle.
For example, take the slots and advertisers from before:

  price   slot   advertiser   valuations for slots a, b, c
  13      a      x            30, 15, 6
  3       b      y            20, 10, 4
  0       c      z            10, 5, 2

The optimal assignment gives slot a to x, b to y, and c to z. The prices come from the harm each advertiser causes:
If x weren't present, y would have gotten slot a, worth 20 to it, and z would have gotten slot b, worth 5.
In fact, x is present: y gets b, worth 10 to it, instead of a, worth 20 (harm = 20 – 10 = 10); z gets c, worth 2, instead of b, worth 5 (harm = 5 – 2 = 3). So x pays 10 + 3 = 13.
By the same reasoning, y pays 3 (the harm to z) and z pays 0, giving the prices shown above.
To generalize this pricing, let
S: the set of sellers;
B: the set of buyers;
V(S, B): the maximum total valuation over all possible perfect matchings of S and B;
S – i: the set of sellers without i;
B – j: the set of buyers without j.
The VCG price pij that we charge buyer j for item i is
pij = V(S, B – j) – V(S – i, B – j)
The first term is the best total valuation the other buyers could achieve if j weren't present; the second is the best they can achieve when j takes item i. The difference is the harm j's presence causes.
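For small examples, pij = V(S, B – j) – V(S – i, B – j) can be evaluated by brute force over matchings. A Python sketch (function names are illustrative), reproducing the prices 13, 3, 0 from the example:

```python
from itertools import permutations

def max_total_valuation(rates, values):
    """V(S, B): the best total valuation over all ways to assign the slots in
    `rates` to the advertisers in `values` (pads with zeros if sizes differ)."""
    n = max(len(rates), len(values))
    r = list(rates) + [0] * (n - len(rates))
    v = list(values) + [0] * (n - len(values))
    return max(sum(ri * vj for ri, vj in zip(r, perm)) for perm in permutations(v))

def vcg_price(rates, values, i, j):
    """p_ij = V(S, B - j) - V(S - i, B - j)."""
    others = values[:j] + values[j + 1:]
    return (max_total_valuation(rates, others)
            - max_total_valuation(rates[:i] + rates[i + 1:], others))

rates = [10, 5, 2]   # slots a, b, c
values = [3, 2, 1]   # advertisers x, y, z
# The optimal assignment matches slot k with advertiser k; the VCG prices:
print([vcg_price(rates, values, k, k) for k in range(3)])  # → [13, 3, 0]
```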
The VCG Price-Setting Procedure
We assume there is a single price-setting authority, which collects information, assigns items, and charges prices. The procedure:
1. Ask buyers to announce their valuations for the items (this requires truthful reporting);
2. Choose a socially optimal assignment of items to buyers (this we can do);
3. Charge each buyer the appropriate VCG price, pij = V(S, B – j) – V(S – i, B – j).
Market-clearing prices vs. VCG prices:
a market-clearing price is a posted price, based on an English-auction-style procedure;
a VCG price is a personalized price, based on the 2nd-price auction.
2nd-price auction vs. VCG prices:
the basic idea of both is the "harm done to others" principle;
the 2nd-price auction covers a single item, VCG prices cover multiple items;
the 2nd-price auction is itself a VCG price auction: suppose there is 1 real item and n – 1 fictitious items.
Analyzing the VCG Procedure: Truth-Telling as a Dominant Strategy
We'll show the following claim:
If items are assigned and prices computed according to the VCG procedure, then truthfully announcing valuations is a dominant strategy for each buyer, and the resulting assignment maximizes the total valuation over all perfect matchings of slots and advertisers.
The second part is immediate: if buyers report their valuations truthfully, then the assignment is chosen to maximize the total valuation by definition. We'll focus on the first part.
Suppose that when buyer j announces her valuations truthfully, she is assigned item i; her payoff is vij – pij. If j lies, there are two cases.
Case 1: j lies but still gets item i. Her payoff doesn't change, because pij is independent of her own bid.
Case 2: j lies and gets some other item h, with payoff vhj – phj. Compare the two payoffs:
vij – pij = vij – (V(S, B – j) – V(S – i, B – j)) = vij + V(S – i, B – j) – V(S, B – j) = V(S, B) – V(S, B – j),
since vij + V(S – i, B – j) = V(S, B): assigning i to j is part of an optimal matching.
vhj – phj = vhj – (V(S, B – j) – V(S – h, B – j)) = vhj + V(S – h, B – j) – V(S, B – j) ≤ V(S, B) – V(S, B – j),
since vhj + V(S – h, B – j) ≤ V(S, B): the left side is the value of some matching, which cannot exceed the optimum.
Therefore vij – pij ≥ vhj – phj: when buyer j lies, her payoff becomes smaller or stays the same, so she has no incentive to lie.
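The dominant-strategy argument can be spot-checked numerically: run the VCG procedure on the reported values, then score payoffs with the true values. A brute-force Python sketch for the running example (the misreported values are an arbitrary illustration):

```python
from itertools import permutations

def vcg_outcome(rates, true_values, reported):
    """Run the VCG procedure on the *reported* values, then score each buyer's
    payoff (value of assigned slot minus VCG price) with her *true* values."""
    n = len(rates)
    # socially optimal assignment for the reported values: assign[j] = slot of j
    assign = max(permutations(range(n)),
                 key=lambda p: sum(rates[p[j]] * reported[j] for j in range(n)))
    def best(slot_rates, vals):  # V(S', B'): brute-force optimal matching value
        return max(sum(r * v for r, v in zip(p, vals))
                   for p in permutations(slot_rates, len(vals)))
    payoffs = []
    for j in range(n):
        others = reported[:j] + reported[j + 1:]
        i = assign[j]
        price = best(rates, others) - best(rates[:i] + rates[i + 1:], others)
        payoffs.append(rates[i] * true_values[j] - price)
    return payoffs

rates, values = [10, 5, 2], [3, 2, 1]
print(vcg_outcome(rates, values, values))        # truthful → [17, 7, 2]
print(vcg_outcome(rates, values, [1.5, 2, 1]))   # x under-reports: its payoff drops to 12
```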
The VCG procedure makes buyers happy, but it is not clear that it is the best way to generate revenue for the search engine. Determining which procedure maximizes seller revenue is a current research topic.
The Generalized Second Price Auction
The Generalized Second Price (GSP) auction, introduced by Google, is a generalization of the 2nd-price auction; it is up to the advertisers whether they bid their true valuations.
Each advertiser announces a bid per click. With bids b1 > b2 > ... > bn, the k-th highest bidder gets the k-th slot and pays the (k+1)-th highest bid per click:

  price   slot   advertiser   bid
  b2      a      x            b1
  b3      b      y            b2
  bn      c      z            bn-1
GSP has a number of pathologies that VCG avoids:
truth-telling might not constitute a Nash equilibrium;
there can be multiple Nash equilibria;
some equilibrium assignments don't maximize the total valuation.
The good news: GSP always has at least one Nash equilibrium, and among them there is one that maximizes the total valuation.
However, how to maximize the search engine's revenue is still a current research topic.
Truth-telling may not be an equilibrium. Consider:

  click-through rate   slot   advertiser   revenue per click
  10                   a      x            7
  4                    b      y            6
  0                    c      z            1

If everyone bids truthfully, x gets slot a at price 10 × 6 = 60 (payoff 70 – 60 = 10), y gets slot b at price 4 × 1 = 4 (payoff 24 – 4 = 20), and z gets slot c at price 0.
Now suppose x lies and bids 5 instead of 7. x is then the 2nd-highest bidder, so it gets slot b and pays the 3rd-highest bid: price 4 × 1 = 4, payoff 28 – 4 = 24. (y becomes the highest bidder and gets slot a at price 10 × 5 = 50, payoff 60 – 50 = 10.)
When x tells the truth, its payoff is 70 – 60 = 10; when x lies, its payoff is 28 – 4 = 24. So truth-telling is not an equilibrium in this case.
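The example above is easy to replay in code. A minimal GSP sketch in Python (names are illustrative):

```python
def gsp(rates, bids, values):
    """Generalized Second Price: the k-th highest bidder gets the k-th slot
    and pays the (k+1)-th highest bid per click. Returns each bidder's payoff."""
    order = sorted(range(len(bids)), key=lambda j: -bids[j])
    payoff = [0] * len(bids)
    for slot, j in enumerate(order):
        next_bid = bids[order[slot + 1]] if slot + 1 < len(order) else 0
        payoff[j] = rates[slot] * (values[j] - next_bid)
    return payoff

rates = [10, 4, 0]    # click-through rates of slots a, b, c
values = [7, 6, 1]    # true per-click values of advertisers x, y, z

print(gsp(rates, [7, 6, 1], values))  # truthful bids → [10, 20, 0]
print(gsp(rates, [5, 6, 1], values))  # x shades its bid to 5 → [24, 10, 0]
```

x's payoff rises from 10 to 24 by shading its bid, matching the calculation above.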
Multiple and non-optimal equilibria. With the same click-through rates and revenues per click, both of the following bid profiles are Nash equilibria:

  click-through rate   slot   advertiser   revenue per click   bid 1   bid 2
  10                   a      x            7                   5       3
  4                    b      y            6                   4       5
  0                    c      z            1                   2       1

In the first profile, x gets slot a at price 10 × 4 = 40 (payoff 70 – 40 = 30), y gets slot b at price 4 × 2 = 8 (payoff 24 – 8 = 16), and z pays 0.
To see why this is an equilibrium: if x lowered its bid from 5 to 3, it would drop to slot b and pay 4 × 2 = 8, for payoff 28 – 8 = 20, less than 30, so x is not motivated to bid lower. If y raised its bid from 4 to 6, it would take slot a and pay 10 × 5 = 50, for payoff 60 – 50 = 10, less than 16, so y is not motivated to bid higher. The remaining deviations can be checked the same way.
In the second profile, y's bid of 5 is highest, so y gets slot a at price 10 × 3 = 30 (payoff 60 – 30 = 30) and x gets slot b at price 4 × 1 = 4 (payoff 28 – 4 = 24).
The total payoff is 30 + 16 = 46 in the first equilibrium and 24 + 30 = 54 in the second. The second equilibrium is also non-optimal: its assignment gives total valuation 60 + 28 = 88, while the first gives 70 + 24 = 94.
The Revenue of GSP and VCG
The search engine's revenue depends on which equilibrium is selected.
In the first equilibrium (bids 5, 4, 2), the slot prices are 40, 8, 0, for total revenue 48.
In the second equilibrium (bids 3, 5, 1), the slot prices are 30, 4, 0, for total revenue 34.
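Both revenue figures follow from the GSP pricing rule (each slot's winner pays the next-highest bid per click). A one-function sketch in Python:

```python
def gsp_revenue(rates, bids):
    """Search-engine revenue under GSP: the k-th slot's winner pays the
    (k+1)-th highest bid per click."""
    b = sorted(bids, reverse=True)
    return sum(r * nxt for r, nxt in zip(rates, b[1:] + [0]))

rates = [10, 4, 0]
print(gsp_revenue(rates, [5, 4, 2]))  # first equilibrium  → 10*4 + 4*2 = 48
print(gsp_revenue(rates, [3, 5, 1]))  # second equilibrium → 10*3 + 4*1 = 34
```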
VCG for the same example:

  click-through rate   slot   advertiser   revenue per click   valuations for slots a, b, c
  10                   a      x            7                   70, 28, 0
  4                    b      y            6                   60, 24, 0
  0                    c      z            1                   10, 4, 0

VCG charges x a price of 40 (the harm to the others: y loses 60 – 24 and z loses 4 – 0), charges y a price of 4 (z loses 4 – 0), and charges z nothing, for total revenue 40 + 4 = 44.
Does GSP or VCG provide more revenue to the search engine?
It depends on which equilibrium of the GSP the advertisers use: in the example above, GSP yields 48 in the first equilibrium (more than VCG's 44) but only 34 in the second (less than 44).
Equilibria of the Generalized Second Price Auction
GSP always has a Nash equilibrium. Assume # of slots = # of advertisers, with slots 1, ..., n in decreasing order of click-through rate and advertisers 1, ..., n in decreasing order of valuation per click.
Take a set of market-clearing prices {p1, ..., pn} for this matching market; we can construct them using the procedure from Chapter 10 (Obtaining Market-Clearing Prices). From these, define prices per click
p*i = pi / ri (ri: the click-through rate of slot i).
The per-click prices are decreasing: p*1 ≥ p*2 ≥ ... ≥ p*n.
To see why, consider slots j and k with j < k (so rj ≥ rk), and let advertiser k be the one assigned slot k at the market-clearing prices; the goal is to show p*j ≥ p*k.
Advertiser k's payoff from slot k is (vk – p*k)rk, and from slot j it would be (vk – p*j)rj. Since the prices are market-clearing, advertiser k weakly prefers its own slot: (vk – p*k)rk ≥ (vk – p*j)rj.
If rj > rk, this inequality together with non-negative payoffs forces vk – p*k ≥ vk – p*j, hence p*j ≥ p*k.
If rj = rk, the slot prices satisfy pj ≥ pk, so p*j = pj/rj ≥ pk/rk = p*k.
We now simply have each advertiser j ≥ 2 place a bid of p*j–1, and advertiser 1 place any bid x larger than p*1. Under GSP, each slot's price per click is the next-highest bid, which recovers exactly the market-clearing prices:

  price     slot   advertiser   bid
  p*1       1      1            x (> p*1)
  p*2       2      2            p*1
  ...
  p*n = 0   n      n            p*n-1
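The bid construction is mechanical once the per-click market-clearing prices p* are known. A sketch in Python, using the earlier example where the market-clearing (VCG) prices are 13, 3, 0 and the rates are 10, 5, 2, so p* = [1.3, 0.6, 0] (the top bid of 2.0 is an arbitrary choice above p*1):

```python
def equilibrium_bids(p_star, top_bid):
    """Advertiser 1 bids anything above p*_1; advertiser j (j >= 2) bids p*_{j-1}."""
    return [top_bid] + p_star[:-1]

p_star = [1.3, 0.6, 0.0]   # p*_i = p_i / r_i for prices 13, 3, 0 and rates 10, 5, 2
bids = equilibrium_bids(p_star, top_bid=2.0)
# GSP then charges slot j the (j+1)-th highest bid, recovering p*_j exactly:
gsp_prices = bids[1:] + [0.0]
print(bids)        # → [2.0, 1.3, 0.6]
print(gsp_prices)  # → [1.3, 0.6, 0.0]
```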
Why do these bids form a Nash equilibrium? Consider advertiser j, currently in slot j:
If it lowered its bid to move down to slot k, its price per click would become p*k. But since the prices are market-clearing, j is at least as happy with its current slot at its current price as it would be with slot k at slot k's current price.
If it raised its bid to move up to slot i, it would have to outbid the advertiser currently in slot i, and would then pay that advertiser's bid, p*i–1, per click, which is at least the current price p*i of slot i. Since j doesn't prefer slot i even at price p*i (again because the prices are market-clearing), it certainly doesn't prefer it at a higher price.
Therefore the bids above form a Nash equilibrium.
Ad Quality
The assumption of a fixed click-through rate.
We assumed the click-through rate ri of each slot is fixed; in practice, it also depends on the content of the ad. Search engines worry about a low-quality advertiser bidding very highly: the click-through rate would be small, so the search engine's revenue would also be small.
The role of ad quality.
Use an estimated ad quality factor qj for advertiser j:
the click-through rate becomes qjri (instead of ri); if qj = 1 for all j, this reduces to the previous model;
the valuation of advertiser j for slot i becomes vij = qjrivj;
the bid bj of advertiser j is weighted to qjbj for ranking;
the price is the minimum bid the advertiser would need in order to hold its current position.
The analysis of this version of GSP parallels that of the normal GSP.
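A sketch of the quality-weighted ranking in Python (the numbers are hypothetical; the rule "minimum bid needed to hold the position" becomes the next-highest score divided by the advertiser's own quality):

```python
def gsp_with_quality(rates, bids, qualities):
    """Rank advertisers by quality-weighted bid q_j * b_j. The per-click price
    is the minimum bid that keeps the winner in place: next score / own quality."""
    score = [q * b for q, b in zip(qualities, bids)]
    order = sorted(range(len(bids)), key=lambda j: -score[j])
    outcome = {}
    for slot, j in enumerate(order):
        nxt = score[order[slot + 1]] if slot + 1 < len(order) else 0.0
        outcome[j] = (slot, nxt / qualities[j])  # (slot index, price per click)
    return outcome

# Hypothetical numbers: advertiser 0 bids higher but has low estimated quality,
# so advertiser 1 wins the top slot (scores 4.0 vs 5.0).
print(gsp_with_quality([10, 4], bids=[8, 5], qualities=[0.5, 1.0]))
```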
The mysterious nature of ad quality.
How is ad quality computed? It can be estimated by actually observing the click-through rate of the ad, but search engines don't disclose the actual method.
How does the behavior of a matching market such as this one change when the precise rules of the allocation procedure are kept secret? This is still a topic for potential research.
Complex Queries and Interactions Among Keywords
Example 1.
A company selling ski vacation packages to Switzerland might bid on "Switzerland", "Swiss vacations", "Swiss hotels", ...
With a fixed budget, how should the company divide its budget across the different keywords? This is still a challenging problem; see † for current research on it.
† Paat Rusmevichientong and David P. Williamson. An adaptive algorithm for selecting profitable keywords for search-based advertising services. In Proc. 7th ACM Conference on Electronic Commerce, pages 260–269, 2006.
Example 2.
Some users issue a query like "Zurich ski vacation trip December" that no advertiser has bid on.
Which ads should the search engine show? Even if relevant advertisers can be identified, how much should they be charged for a click?
Existing search engines handle this case through special agreements; it is a very interesting direction for further research.