Regarding Value Based Bidding (VBB):
- What is it?
- What THEY don’t tell you
- 4 Revealing case studies to inform:
- Data perspective
- Business perspective
- Google Ads operations
- Campaign strategy
1. UNLOCK GOOGLE MARKETING PLATFORM'S POWER: UNVEILING VALUE-BASED BIDDING SECRETS FOR MAXIMUM EFFICIENCY GAIN
Inside scoop on the non-obvious success tactics without clickbait
3. What’s coming up in the next half hour?
● What is it?
● What they don’t tell you
● 4 Revealing case studies to inform:
○ Data perspective
○ Business perspective
○ Google Ads operations
○ Campaign strategy
6. Value Based Bidding for campaign optimisation
https://www.thinkwithgoogle.com/intl/en-emea/marketing-strategies/automation/bidding-for-value-automation/
8. Do this first
The need for clean, high-quality data is obvious and cannot be overstated
● Privacy first
● Regulatory compliant
● Lighter client side
● Cleaner
● First party
● Consented
● Modelled
#TAKEAWAY
9. Data Quality Considerations
● Volume
○ Minima
● Completeness
○ Conversion rate
● Consistency
○ Data types
○ Outliers
○ Volatility
● Range
○ Values
○ Skewness
● Timeliness
○ Reliable
○ Available
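The checks above can be scripted as a simple quality gate over a 30-day window of conversion values. A minimal sketch - the function name, the 15-conversion minimum, and the 3-sigma outlier rule are illustrative assumptions, not platform requirements:

```python
from statistics import mean, stdev

def quality_report(values, min_volume=15):
    """Score a 30-day window of conversion values against the checks above."""
    report = {}
    # Volume: meet the platform minimum (e.g. 15 conversions / 30 days)
    report["volume_ok"] = len(values) >= min_volume
    # Completeness: share of records that actually carry a value
    clean = [v for v in values if v is not None]
    report["completeness"] = len(clean) / len(values) if values else 0.0
    # Consistency: flag outliers beyond 3 standard deviations
    if len(clean) >= 2:
        mu, sigma = mean(clean), stdev(clean)
        report["outliers"] = [v for v in clean if sigma and abs(v - mu) > 3 * sigma]
    # Range: the spread of values - variability is itself a useful signal
    if clean:
        report["value_range"] = (min(clean), max(clean))
    return report

print(quality_report([120.0, 95.5, None, 300.0, 88.0] * 4))
```

A report like this, run before every upload window, is one way to make data quality the performance metric the later slides argue for.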
11. Why is this important?
What goes on under the hood isn’t visible.
Control the input to inform the output.
#TAKEAWAY
12. Uncomfortable truth
VBB is not guaranteed to work for you.
It is not necessarily a plug-and-play success story.
Success or failure of VBB is contingent on YOUR choices, not Google's model.
#TAKEAWAY
13. Pre engagement analysis output
Variability in values offers more opportunity to find efficiencies.
#TAKEAWAY
14. Pre engagement analysis output
Volatility in values is a risk.
Aim for a consistent weekly promotion cadence.
#TAKEAWAY
18. Pre engagement analysis output
● Check distributions
○ Where is the value?
○ Where will VBB optimise?
○ What signal are you sending to VBB?
○ Does this align with strategic goals?
#TAKEAWAY
30. Value Based Bidding for campaign optimisation
…optimised toward individual
customer business outcomes
and transactions
31. Value Based Bidding for campaign optimisation
What are they?
How do I get and use this data?
32. Value Based Bidding for campaign optimisation
CPC
CPA
ROAS
Cost of sale
Transaction value
Profit
LTV
33. Google Marketing Platform
Value Based Bidding for campaign optimisation
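The metrics listed above form a ladder from media efficiency (CPC) up toward business value (profit, LTV), and the ratio metrics relate by simple arithmetic. A sketch with hypothetical aggregate figures - the function and argument names are ours, not a platform API; transaction value and LTV are per-customer inputs rather than ratios, so they are omitted here:

```python
def bidding_metrics(cost, revenue, profit, clicks, conversions):
    """The ladder of efficiency metrics a bidding strategy can target."""
    return {
        "CPC": cost / clicks,            # cost per click
        "CPA": cost / conversions,       # cost per acquisition
        "ROAS": revenue / cost,          # return on ad spend
        "cost_of_sale": cost / revenue,  # inverse of ROAS
        "POAS": profit / cost,           # profit on ad spend
    }

# Illustrative figures: EUR 1,000 of spend returning EUR 10,000 of revenue
print(bidding_metrics(cost=1000, revenue=10000, profit=1000, clicks=2000, conversions=100))
```

Note how a flattering ROAS of 10 collapses to a POAS of 1 once margin is accounted for - the gap the later "Level 2" slides explore.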
38. Ad platform alignment
● Not all accounts are primed for VBB
● Not all business strategies go well with VBB
● Google Ads strategy != Business strategy
● Margins are pretty difficult to obtain
● VBB is a process and not a one stop tech exercise
#LEARNINGS
39. Ad platform alignment
● Get to know all of the system features (1):
○ Enhanced conversions and consent
○ OCI (Enhanced Conversions for Leads)
○ Conversion Adjustments, Seasonal adjustments, Data exclusions
○ Cart data reporting
○ Phoebe, Soteria, LTV modelling
○ Feeds (Page, customer, GMC, dynamic)
#LEARNINGS
40. Ad platform alignment
● Get to know all of the account structure options:
○ Shared budget
○ Portfolio strategies (tROAS with max CPC)
○ Ad group ROAS levers
■ Increase ROAS to reduce spend
■ Decrease ROAS to increase spend
○ The new match type considerations - full broad
○ Multiplacement black boxes (pMax, DemandGen)
#LEARNINGS
42. Level 1 - mapping margin to ROAS
Plain and simple, the business needs and wants more
● Determine how (that's on you)
● Assortment (availability, stock state, competitiveness (price, delivery, payment …))
The how is not that simple and requires meetings and a lot of QA
● Value distribution (rev. vs profit)
● Vendor deals and intricacies
● Account restructure
● Find opportunity
43. Ad account structure based on search logic (Category = budget holder, Ad group = context holder, DSA = ideas)
Account
● Category X: N ad groups (ROAS X) + DSA
● Category Y: N ad groups (ROAS Y) + DSA
44. Determine focus areas - vendor, category, product - based on the retailer's ability to perform!
Winner(s)!
45. ABC - Understand the assortment distribution - value, count and segment as much as you can to understand value.
46. ABC - Understand the assortment distribution - value, count and segment as much as you can to understand value.
47. Determine focus areas - vendor, category, product - based on the retailer's ability to perform!
Stages:
0. Old approach (CPC control)
1. ROAS mapping / tROAS SBS
2. Corona (crazy expansion)
3. Continuation (+ new products)
[Chart: revenue, Q1 2018 - Q2 2022, annotated by stage]
49. Level 2 - POAS (sGTM, Soteria, I hate Firestore)
Plain and simple, the business needs to understand
● Technical aspect is solved (high degree of automation, 400k SKUs)
● Assortment (availability, stock state, competitiveness (price, delivery, payment …))
POAS allows you to understand the 'profitability' with a twist
● What if profit happens at the EoY?
● What if you are positioning?
● What if POAS is 1?
● High profitability could mean lower volume
● What if dynamic pricing?
50. Always set the profit value conversion as secondary!!! (at least as the starting point)
51. What to do now - shift budget? Ask for additional budget? As always, it depends!
Ad Group Cost (EUR) Value (EUR) Profit (EUR) POAS
Vendor 1 2 800 168 000 20 000 714%
Vendor 2 1 050 75 000 18 000 1714%
Client says: I understand Vendor 2 efforts bring more, yet I have a full store of Vendor 1 …
52. What if? Shift budget? Remove the group? As always, it depends!
Ad Group Cost (EUR) Value (EUR) Profit (EUR) POAS
Vendor 1 1 000 10 000 1 000 100%
Usually stay with tROAS, but be mindful of the minimum meaningful ROAS, or in some cases revert to max conv value or tCPA (or some other channel?)!
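The POAS figures in the two vendor tables above follow from a single ratio; a quick check like this is useful when deciding whether an ad group clears your minimum meaningful ROAS (the function is an illustration, not a platform metric):

```python
def poas(cost, profit):
    """Profit on ad spend, expressed as a percentage (profit / cost)."""
    return profit / cost * 100

# The vendor rows from the tables above
print(round(poas(2800, 20000)))  # Vendor 1: 714
print(round(poas(1050, 18000)))  # Vendor 2: 1714
print(round(poas(1000, 1000)))   # the marginal 100% POAS case: 100
```

At 100% POAS the spend is only breaking even on profit, which is exactly where "stay or revert" becomes a judgment call.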
54. Level 3 - leads and predictions (EC4L, Conv. adj.)
Plain and simple, the business has issues with lead / conv. quality
● Technical aspect is solved (high degree of skill (OCI, eLTV, CRM))
● Everything on the call center side is superb :)
VBB relies on importing the QL / prediction as fast as possible (3-7d)
● How accurate are we in this short span?
● Sacrifice accuracy for efficiency
● Spot the outliers (definite -)
● Validate subsequently
55. Period 0 - pMax and Search are cool. Cannot attribute QL to Lead.
56. P1 - pMax sucks. Afterwards, quality can be attributed to the lead.
57. P2 - pMax almost gone. Discovery > DemandGen made a mess though.
59. Leads are not easy
● Time to upload is limited (3-7d)
● OCI lowers the conversion count (can VBB work with
that?)
● Make sure you validate the model quality (continuously)
● In some industries this kind of data upload is perhaps the
only option to make the algo smarter as the audiences are
not allowed
#LEARNINGS
60. Level 3b - leads and predictions (EC4L, Conv. adj.)
Plain and simple, the business has issues with lead / conv. quality
● Technical aspect is solved (high degree of skill (OCI, eLTV, CRM))
● Everything on the product side is superb :)
VBB relies on importing the QL / prediction as fast as possible (3-7d)
● How accurate are we in this short span?
● Sacrifice accuracy for efficiency
● Spot the outliers (definite -)
● Validate subsequently
61. The reward system - which needs to exist - informs on success.
Conversion Value → Bucket → Upload value
1080.41, 936.29, 727.12, 517.78 → Highly positive (>2x above avg CPA) → 300 EUR
296.85, 175.32, 152.90 → Positive (above avg CPA) → 150 EUR
77.06, 67.70, 50.02, 44.97, 38.00, 34.15, 30.45, 19.96, 8.75, 8.59 → Negative (below avg CPA) → 0 or?
-15.76, -21.41, -65.25 → Highly Negative → 0 or? Do not upload!
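The bucketing above can be expressed as a small mapping from raw conversion value to the value actually uploaded. In this sketch the assumed average CPA of 150 EUR, and the choice to upload 0 for the negative buckets, are our illustrative readings of the table, not fixed rules:

```python
def bucket_value(conversion_value, avg_cpa=150.0):
    """Map a raw (possibly negative) conversion value to an upload value,
    following the bucket scheme above; avg_cpa is an assumed figure."""
    if conversion_value > 2 * avg_cpa:
        return 300.0   # Highly positive: more than 2x avg CPA
    if conversion_value > avg_cpa:
        return 150.0   # Positive: above avg CPA
    if conversion_value >= 0:
        return 0.0     # Below avg CPA: 0 (or?)
    return None        # Highly negative: do not upload

for v in (1080.41, 296.85, 77.06, -65.25):
    print(v, "->", bucket_value(v))
```

Returning None for the highly negative bucket makes "do not upload" an explicit decision in code rather than a silent omission.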
63. Case study follow up questions
● What happens when you give the same value? (Or no value?)
● What is the range from top to bottom?
● Distribution shape - width and height?
● Can you use negative values?
● Does it matter to use only integers, no floats?
64. Case study follow up questions
● High cardinality data - is it better to normalise to a discrete range?
○ Speed of model curation and update
● What are the effects of sparse values? Is imputation a solution?
○ Synthetic data for data gaps?
○ What if this introduces inadvertent bias?
○ Can you use deliberate bias to direct optimisation?
● Is a curated VBB payload better than sending raw data?
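On the first question - normalising high-cardinality values to a discrete range - one simple option is equal-width binning, replacing each raw value with its bin midpoint before upload. A sketch, where the bin count is an arbitrary choice and not a recommendation:

```python
def discretise(values, n_bins=5):
    """Collapse high-cardinality values into n_bins discrete levels,
    each represented by its bin midpoint (one illustrative scheme)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0          # guard the all-equal case
    out = []
    for v in values:
        idx = min(int((v - lo) / width), n_bins - 1)    # clamp max into top bin
        out.append(round(lo + (idx + 0.5) * width, 2))  # bin midpoint
    return out

print(discretise([8.59, 19.96, 77.06, 296.85, 517.78, 1080.41]))
```

Quantile binning is the obvious alternative when the distribution is heavily skewed, at the cost of less interpretable bucket boundaries.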
65. Case study follow up questions
● Can you feed the VBB a contrived/weighted value to influence the model?
● If a user scores 4 (low-ish - floating voter) and you add a weight to floating voters, do they convert better and move into the high propensity group?
70. Proof
● Measurement against control group baseline
● Does the control group performance change?
○ Give it time - >6 weeks
● Compared to those given actual values?
○ Non converters vs ‘anyway converters’ vs
converters due to optimisation
● Net new customers
● Existing customer growth
71. Change Management
● Transition from CPA to tROAS
○ Revenue to profit to CLV to propensity
■ Short term drop for long term gain?
● What happens to the performance?
● How do you manage change?
○ Test and learn
○ Campaign planning
73. Takeaways
● Privacy first
● Data quality
○ Value variability is good
○ Volatility can kill you
● Data choices
○ Pre engagement analysis
● Campaign planning
● Account configuration
74. Takeaways
● Incrementality
○ Existing customer growth
○ Net new customer acquisition
● Change management
○ Expect initial downturn
○ Plan VBB deployment
■ Promotions
■ Affiliate relationships
75. Takeaways
● Success is not guaranteed
● Choose success metrics
○ Short or long term?
○ What exactly does profit mean?
○ Campaign ROAS?
○ Net new customers?
○ 0% ROAS is not bad
■ Market position
■ New vertical established
Why is this different to other VBB presentations? We’re going deeper than you’ve seen previously. We’re asking questions not even Google has the answer to.
Let’s showcase VBB
How Google sees the values mapped to efficiency - this is not a critique of Google’s intro to VBB - it’s a necessary and useful step. But don’t let this be the totality of your VBB understanding.
This is the top level Think With Google explanation - the enquiring mind will ask how this happens in specific contexts.
What about commercial pressures.
Market trends.
Affiliate influences.
Partnership demands.
Strategic goals.
Merchandising.
And WHAT EXACTLY IS Value?
Get the basics out of the way. Embrace the reality of modern analytics. This is a foundation that is unavoidable. Why would you want to avoid this part? Demonstrate excellence in the basics.
In terms of data quality, the better the signal you can send to Google, the better. A strong signal returns reliable and repeatable output. Output you can understand and trust.
Volume - meet and exceed the minima: 15 conversions in the last 30 days.
Completeness - no point having huge gaps in your week. That’ll affect the choice of metric. Online proxy actions happen more frequently and reliably than offline lead conversion for example.
Consistent data you can rely on is preferable for obvious reasons.
We’ll come on to how range affects VBB performance but you will certainly include this in your quality scoring.
And timeliness is definitely a favourite. Prefer near real time to week-long lags in sales data.
Scoring for data quality requires a level of governance.
You can compare the performance of two data sets from the same source but with different levels of quality - values missing, outliers unaddressed, and so on - and you will see differing returns and performance.
Ideally, you will have data quality automated as a mature setup.
See more in the architecture section. Record the values being sent to VBB, report and align on the output and performance. This is essential instrumentation for your test and learn framework. Establish data quality as a performance metric - maybe even your North Star if nascent/emerging?
Time for home truths.
Yes, you can set it and forget it, but that is no guarantee of success.
Put in the hard yards and you’ll make a difference. Fully understand the ask and you’ll be more likely to see the returns you’d expect.
Let’s start with data quality
Variability is a signal of what’s good and what’s bad as far as the machine is concerned. That means there’s more scope to apply bid adjustments.
If a linear bid is adjusted up and down according to the value distribution, the arbitrage effect will be more apparent. But is this always what you want?
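The linear adjustment idea can be made concrete as a toy calculation: scale a base bid by each conversion's value relative to the average, clamped so extremes cannot run away with the budget. The base bid, floor, and cap here are invented numbers for illustration:

```python
def value_adjusted_bid(base_bid, value, avg_value, floor=0.5, cap=3.0):
    """Scale a base bid linearly by value relative to the average,
    clamped so one extreme value cannot dominate spend."""
    multiplier = max(floor, min(value / avg_value, cap))
    return round(base_bid * multiplier, 2)

values = [20, 50, 100, 400]
avg = sum(values) / len(values)   # 142.5
print([value_adjusted_bid(1.0, v, avg) for v in values])  # [0.5, 0.5, 0.7, 2.81]
```

The arbitrage effect is visible even in this toy: spend shifts toward the high-value tail, which is exactly why the shape of the distribution matters before you switch it on.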
Consider these illustrative distributions. Where is VBB going to optimise for? What’s signalled as ‘good’?
See where we flag ‘value’? That’s where VBB is going to optimise bids. Away from low value to high value. Less of what you don’t want, more of what you do. Automagically!
But what about the other, less clear distributions. Less clear signal.
Imagine a subscription business selling one product for $x - no variability in value makes good/bad prospects hard to identify. Maybe quantity rather than price is a better choice and has more variability?
In scoring the data, ask these questions.
But this is a tricky scenario. Seldom do we have such clean distributions. Something messier and noisier is more likely. So where is VBB going to go here? We need to be cautious so as not to ruin existing value.
If we’ve got a winning formula here already, we need to be careful not to undo this good work.
Protect this.
But it's not that simple…
We have data that shows low-LTV customers are more likely to buy once… but we don't retain their custom.
We’ve got solid value, at a retention rate that works.
This predicted behaviour is in the wrong place on our LTV scale, though.
It needs to move over there…
Can we get VBB to ignore that signal and focus on what is strategically useful for us?
Hide the data. Don’t score it.
We need to tell VBB ONLY what it needs to know. Leave the high-LTV campaigns alone.
Let’s showcase VBB
Notice the language here
Individual customer business outcomes.
Right.
But exactly what are we talking about? And how do we get and use these values?
Examples and where we might get them - consider technicalities of data pipelines, data timeliness, governance, customer consent
Consider how we integrate across GMP for data accessibility and activation
Real time values are available as the transaction happens on the site and in the app
CRM integration may require more technical work. The higher utility of the data may come at a cost in terms of investment required
A lot deeper. There’s quite a bit to unpack here
Search demand is not the only factor when deciding on what to do next.
Before you try these tactics, make sure you have tested the approach on your campaigns, with your data, and your customers. Results DO vary.
As well as considering the range, the values, and the data types - it's very context dependent.
These are thoughts, questions, unanswered brain farts. I don’t expect to provide answers yet. One day. At the bar?
For sparse data - 30-50 conversions per month is normally needed. Is it okay to impute values? Do we leave data sparse and let the Google AI figure it out?
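If you do experiment with imputation, a median fill is the cautious default: sparse e-commerce data with a few large orders would drag a mean-based fill upward. A minimal sketch - whether to impute at all remains the open question:

```python
from statistics import median

def impute_missing(values):
    """Fill missing conversion values with the median of observed ones.
    Median rather than mean, so outliers don't inflate the fill value."""
    observed = [v for v in values if v is not None]
    fill = median(observed)
    return [fill if v is None else v for v in values]

print(impute_missing([50.0, None, 120.0, 80.0, None]))  # [50.0, 80.0, 120.0, 80.0, 80.0]
```

Whatever fill you choose, tag imputed records in your own logs so a later incrementality read can separate real signal from synthetic.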
So quality is actually a huge topic in its own right. Availability and signal strength are more easily quantified, but the testing aspect cannot be avoided.
Let’s put it together. Some code!?
Google projects Phoebe and Soteria follow these architecture patterns - choosing either Firestore or Vertex AI as the data enrichment source.
The Firestore API in sGTM is well known - a Firestore variable is straightforward to set up.
The VertexAI integration is achieved via an HTTP request - like talking to a Cloud Function perhaps?
In these documented VBB architectures the feedback loop does not exist.
The model values being fed to GMP come from an existing predictive model.
GMP uses these to refine bidding. Were the predictions any good? Did the user buy? Feed the predictions and outcomes back into the model for retraining.
VBB performance is dependent on the predictions being fed to it.
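Closing that missing feedback loop means storing each prediction next to the eventual outcome, then retraining on the pairs. A schematic sketch - the class and method names are ours, and none of this is part of the documented Phoebe/Soteria architectures:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLoop:
    """Log predicted values next to observed outcomes, so the model
    feeding VBB can later be retrained on its own errors (schematic)."""
    records: list = field(default_factory=list)

    def log_prediction(self, user_id, predicted_value):
        # Record the value sent to bidding; outcome is unknown at this point
        self.records.append({"user": user_id, "predicted": predicted_value, "actual": None})

    def log_outcome(self, user_id, actual_value):
        # Attach the observed outcome to the earliest open prediction
        for r in self.records:
            if r["user"] == user_id and r["actual"] is None:
                r["actual"] = actual_value
                return

    def training_pairs(self):
        """(predicted, actual) pairs ready for model retraining."""
        return [(r["predicted"], r["actual"]) for r in self.records if r["actual"] is not None]

loop = FeedbackLoop()
loop.log_prediction("u1", 120.0)
loop.log_prediction("u2", 60.0)
loop.log_outcome("u1", 95.0)      # u1 converted for less than predicted
print(loop.training_pairs())      # only completed pairs are returned
```

In production this log would live wherever the enrichment data already sits; the point is that retraining input becomes a by-product of normal operation rather than a separate export.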
Then we compare performance over time. Typically over 6 weeks to be able to train the model.
What comparison do we need to perform to establish incrementality.
Consider testing different control group parameters.
Some examples of values we might consider in building VBB for non-retail
See how the utility might vary, but the timeliness and variability will be satisfied.