Fundamentals of Campaign Tracking

EXECUTIVE SUMMARY

Campaign tracking measures the effect of a campaign by reporting on the differences in customer or prospect status found in a database before and after a direct marketing effort. The most important considerations are:

1. Make the decision about what will be measured, and where the data to support that measurement will come from, before, not after, the campaign. If you don't design and set up the tracking methodology in advance, you may have problems getting to the information after the fact.

2. Successful campaign tracking requires the ability to compare and match contacted prospects and customers to responders and buyers, something that should be almost automatic in a marketing database that includes the appropriate data feeds. In a non-database environment, no matter how carefully you plan a campaign, sometimes all of the information needed to track is simply not available at the conclusion of the campaign. More often, the information changes on successive iterations of the customer file in ways that make tracking nearly impossible.

3. If the point of measuring a campaign is to determine whether or not to repeat it, make sure that the results are statistically projectable and repeatable.

4. Tracking has an optimum measurement window. To accurately measure gains that resulted from a campaign, you need a "snapshot" of the universe, with all the corresponding selection criteria, just before contact is made and one just after prospects and customers have responded.

5. Tracking windows often overlap. As a result, a single response or sale may be attributable to more than one campaign. Develop business rules that address this issue.

6. It is not possible to produce reliable and timely tracking without updating the database at least monthly. Less frequent updates do not provide timely "before" and "after" snapshots of the database; more frequent updates increase tracking accuracy.

Richard N. Tooker, VP, Solutions Architect

WHITE PAPER: FUNDAMENTALS OF CAMPAIGN TRACKING

Campaign tracking is the comparison of prospect or customer status at two different points in time. For purposes of this discussion, the two points in time are immediately before and immediately after a direct marketing effort.
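The core operation just described, comparing the same individuals' status in a "before" and an "after" snapshot of the database, can be sketched in a few lines. This is a minimal illustration only; the status values, record layout, and ID scheme are hypothetical, not taken from any particular marketing database:

```python
# Minimal sketch of before/after snapshot comparison.
# Each snapshot maps a customer ID to that individual's record;
# the "status" field and its values are hypothetical.
def measure_campaign(before, after, contacted_ids):
    """Count contacted individuals who converted between the two snapshots."""
    conversions = 0
    for cid in contacted_ids:
        was = before.get(cid, {}).get("status")
        now = after.get(cid, {}).get("status")
        if was == "prospect" and now == "customer":
            conversions += 1
    return conversions

before = {1: {"status": "prospect"}, 2: {"status": "prospect"}, 3: {"status": "customer"}}
after  = {1: {"status": "customer"}, 2: {"status": "prospect"}, 3: {"status": "customer"}}
print(measure_campaign(before, after, contacted_ids=[1, 2]))  # prints 1
```

Note that the function only inspects individuals who were actually contacted; individual 3 converted on his own and is not counted, which is exactly the distinction a control group (discussed later) is designed to measure.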
There are three steps that must take place for a campaign to be tracked:

1. Customers or prospects are selected from the database and some type of marketing effort or "treatment" is applied to them.

2. The database is allowed to "age," giving the contacted customers and prospects time to react to the marketing effort.

3. The same individuals are then identified and compared, before and after the marketing effort.

Tracking measures the differences found in the database before and after a campaign. To accomplish that successfully, there are certain fundamentals that must be considered.

TRACKING CONSIDERATION #1: CAMPAIGN DESIGN

The decisions about what will be measured and what data will be used for that measurement must be made before, not after, a campaign takes place. If not, it is possible, even likely, that there will be problems in successfully measuring the effort at the conclusion of the campaign.

Here's an example: a direct mail campaign promotes a specific product with the objective of generating leads for a field sales force. The 80,000 target prospects selected for testing include individuals in the database who meet specific gender, age, income and home ownership criteria, and those individuals in the database coded as likely to be responsive to mail order. Even though an individual prospect will only be present in the campaign once because of the way the selection is done, the criteria are not necessarily mutually exclusive. Some of the prospects who qualify to receive the mailing because they meet the demographic criteria also meet the criteria for mail order responsiveness, and vice versa.
Two different creative approaches known to be successful at generating leads are used in the campaign:

• Package A: a letter and lift note in a #10 envelope, creatively focused on the benefits of owning the product.

• Package B: a jumbo-sized postcard focused on the value pricing of the product.

In both packages, the response options are to call an 800 number specific to the creative package used, or to go to one of two separate Web micro-sites, one for each of the two packages, and register to receive more information about the product. Package A is mailed to 40,000 prospects, selected nth-name, and Package B is mailed to the other 40,000. The prospects receiving the two packages are given two separate promotion codes for later tracking.

Initially, the objectives were to efficiently generate leads for the product and to determine which creative package was best at doing that. The number of leads was to be compared to the cost of mailing each package to arrive at a cost per lead generated. No problem: the campaign was coded and set up to do that in advance, and the response mechanisms kept the two groups of responders separate. This is basic direct marketing, properly executed.

However, after the fact, someone asks three questions:

1. Which of the selection criteria works best, the prospects who meet the demographic criteria or the mail order responders?

2. Which is the strongest copy platform, the "benefits" approach or the "value pricing" approach?

3. Which format is best, the #10 envelope or the jumbo postcard?

There can be no answers, given the way the campaign was structured. To answer any of the above questions requires strict adherence to a fundamental campaign-tracking premise: a variable must be isolated if it is to be measured.

Had the tracking objectives been properly defined before the mailing, the campaign could have been designed and executed so that answering any or all of those questions would have been possible. The campaign would have been set up to isolate each of the variables, as illustrated below:

Package Formats / Copy Platforms

CONTACTS             #10 Envelope            Postcard
                     Benefits    Value       Benefits    Value
Demographic Select   10,000      10,000      10,000      10,000
Mail Order Buyer     10,000      10,000      10,000      10,000

In this campaign design, there are eight separate test cells, each coded so that the prospects in them can be identified at the conclusion of the campaign on a successive generation of the database. The design would have answered all of those "after the fact" questions, as well as the original question.

It is imperative that marketers think through what questions might be asked after the campaign so that the campaign tracking methodology can be set up correctly before the mailing. Always beginning with the end in mind requires discipline, but there really isn't any way to overstate the importance of this exercise.

CONSIDERATION #2: DATA QUALITY

Successful campaign tracking requires the ability to compare and match contacted prospects and customers to responders and buyers after the campaign is concluded. Sometimes the information needed to track campaigns changes, or is not available. If the before and after snapshots of the database are different in some way, it can create problems in accurate comparison.
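The compare-and-match requirement can be sketched as a simple join between the contacted file and the responder file. The shared key and list layouts here are hypothetical; in practice the match key must itself survive intact between the two snapshots, which is the data-quality problem this section describes:

```python
# Sketch: match responders back to the contacted file by a shared key.
# Responders who cannot be matched to a contact are flagged for review,
# which is one symptom of inconsistent data between snapshots.
def match_responders(contacted_ids, responder_ids):
    contacted = set(contacted_ids)
    matched = [r for r in responder_ids if r in contacted]
    unmatched = [r for r in responder_ids if r not in contacted]
    return matched, unmatched

matched, unmatched = match_responders([101, 102, 103], [102, 999])
print(matched, unmatched)  # prints [102] [999]
```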
For example:

• An individual may have been selected for inclusion in a campaign because he was in a specific segment defined by a predictive model. If the sales cycle is long, that individual may no longer be in the same segment at the conclusion of the campaign because something in the data about him changed, a phenomenon known as "score drift." Another manifestation of the same problem occurs when a model is recalibrated or rebuilt using new or different variables in the time period between the implementation of the campaign and its measurement, resulting in a redefinition of the segments. So, if the promotion to a specific segment was a winner, it might or might not be completely accurate to conclude that promoting to that segment will always produce the same result. This problem can be solved by a competent statistician, but it must be recognized and addressed.

• Questionable data quality can significantly limit the utility of tracking information. An example of this is relying on field-entered data from a sales force automation system to record lead disposition. At best, SFA data will only be about 75% accurate, and then only if the salespeople are actually employed by the company and keeping up with lead disposition is positioned as an absolute requirement of the job. In most scenarios, SFA data should be considered half-right.

• Far and away, the most common circumstance is inconsistency in the source data between the two snapshots. The information in a feed to the database used to select the person for a campaign may have been inaccurate or inaccurately interpreted, then corrected or redefined by IT during the time between selection and measurement. Sometimes accuracy is not the problem, but the data changes in some way that makes it useless for tracking. Or, a data source used for selection may simply disappear during that time. The problem of data inconsistency occurs sooner or later in virtually all database marketing programs, and is a constant problem in many of them.

The solution to data problems is to attempt, wherever possible, to limit selection criteria to only those data elements proven to be accurate and known to be consistent from update to update. That involves assigning knowledgeable "data cops" to the job of asking hard questions about the data before, not after, campaigns are designed and implemented. Constant vigilance can significantly minimize the problem, but it is rarely possible to completely eliminate it, especially in complex, high-velocity marketing environments.

CONSIDERATION #3: STATISTICAL VALIDITY

Measurement requires statistically valid sample sizes. Campaigns with measurement cells that are too small will not produce results that can be projected to a larger universe.

Many direct marketing books include tables that define how many contacts are required at various levels of response to produce a statistically valid result at various confidence levels. Here's a "safe" shortcut: if a cell contains enough marketing contacts to generate at least 100 actions (responses or sales), then the cell will likely produce a repeatable result. If you simply don't know what the response rate is likely to be, make sure that each cell includes at least 10,000 contacts.
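The shortcut above is simple arithmetic: the contacts a cell needs is the action target divided by the expected response rate. A small sketch, using exact fractions to avoid floating-point rounding at the boundary:

```python
# Sketch of the "100 actions" rule of thumb: contacts needed so a cell
# yields at least min_actions responses at a given expected response rate.
import math
from fractions import Fraction

def required_cell_size(response_rate_percent, min_actions=100):
    """Contacts per cell; response rate given as a percentage (1 = 1%)."""
    rate = Fraction(str(response_rate_percent)) / 100  # exact arithmetic
    return math.ceil(min_actions / rate)

print(required_cell_size(1))     # prints 10000 (1% response)
print(required_cell_size(0.25))  # prints 40000 (0.25% "worst case")
```

At a 0.25% expected response rate, each of the eight cells would need 40,000 contacts, which is how the paper arrives at a 320,000-piece campaign in place of the original 80,000.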
Package Formats / Copy Platforms

CONTACTS             #10 Envelope            Postcard
                     Benefits    Value       Benefits    Value
Demographic Select   10,000      10,000      10,000      10,000
Mail Order Buyer     10,000      10,000      10,000      10,000

As an example, look again at the lead generation campaign illustrated above, which has eight measurement cells of 10,000 each. If 10,000 contacts in a measurement cell are going to produce 100 responses, the response rate has to be at least 1% (100 divided by 10,000). Assuming an expected "worst case" response rate of only 0.25% (¼ of 1%), cells of 10,000 contacts would only produce 25 responses (10,000 times .0025), not nearly enough to be statistically projectable. At a 0.25% response rate, each cell would need to contain 40,000 contacts in order to generate 100 responses. The total campaign would therefore need to be 320,000 contacts rather than the original 80,000. This can be a problem if the universe (or the budget) is not large enough to support a promotion that is four times larger.

There is a way to draw reasonable conclusions from the data in less-than-valid cells by aggregating them. Suppose, for example:

• The "value pricing" copy platform was the clear winner in all four cells in which it appears, and collectively, those four cells produced 111 responses (vs. 34 for the "benefits" platform), as illustrated below. It would be reasonable to conclude from that result that the value pricing platform should be used going forward.

• The cells containing the demographic select total 94 responses versus only 51 responses for the mail order buyers. Although you haven't reached the needed 100 responses to be absolutely sure, the result strongly suggests that demographics are a more effective way to select prospects or customers in this instance.

• Similarly, the #10 envelope package produced a total of 94 responses vs. 51 for the postcard. Again, although you haven't reached the requisite 100 responses, the result strongly suggests that the #10 envelope package should be used.
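The aggregation technique just described, rolling undersized cells up along each test dimension, can be sketched as follows. The cell counts are illustrative figures patterned on the response table used in this example:

```python
# Sketch: aggregate eight undersized test cells along each dimension
# (select, package format, copy platform) to get directional totals.
# Keys are (select, format, platform); counts are illustrative.
cells = {
    ("demo", "envelope", "benefits"): 17, ("demo", "envelope", "value"): 42,
    ("demo", "postcard", "benefits"): 8,  ("demo", "postcard", "value"): 27,
    ("mail", "envelope", "benefits"): 6,  ("mail", "envelope", "value"): 29,
    ("mail", "postcard", "benefits"): 3,  ("mail", "postcard", "value"): 13,
}

def total(dimension_index, value):
    """Sum responses across all cells sharing one value on one dimension."""
    return sum(n for key, n in cells.items() if key[dimension_index] == value)

print(total(2, "value"), total(2, "benefits"))     # copy platform totals
print(total(0, "demo"), total(0, "mail"))          # select totals
print(total(1, "envelope"), total(1, "postcard"))  # format totals
```

Each aggregate pools four cells, so the copy-platform comparison clears the 100-action threshold even though no individual cell does.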
Package Formats / Copy Platforms

RESPONSES            #10 Envelope            Postcard
                     Benefits    Value       Benefits    Value
Demographic Select   17          42          8           27
Mail Order Buyer     6           29          3           13

If a test campaign produced the response pattern illustrated above, the right course of action on a rollout would be to use the value pricing copy platform, and, if the mailing is large, most of the mail should use the #10 envelope package format and the demographic select. Test cells should be added to re-test
and validate the select and package format results, with just enough contacts to provide a statistically projectable result.

In some situations, there are cells of customers or prospects, selected in the same fashion as those who will be contacted, who are flagged as a non-promotion "control group." They do not get contacted, but their actions are tracked in the same way as those who do receive the contacts. The objective of this exercise is to determine the degree to which those non-contacted individuals will also buy the product being promoted, without the "push" of any type of direct campaign, though they are still exposed to the company's use of general advertising such as branding TV spots. Comparing their actions to those of the individuals who did receive direct marketing promotion provides a measurement of the incremental effectiveness of the promotion. This is sometimes important, but such a measure has to be weighed against the lost opportunity cost of not selling a product when sales are needed, especially when statistical validity requires a large number of prospects or customers to be held out.

CONSIDERATION #4: TRACKING WINDOWS

Every campaign has an optimum measurement window. To accurately measure gains that resulted from a campaign, a "snapshot" of the file is needed just before individuals receive the promotion and one just after they have reacted to it. The dates on which the response and sales feeds are provided for use in updating the database provide these two critical snapshots of what the database looked like, before and after. Therefore, the measurement window must begin and end with specific feed dates. If it does not, the actual status of responders, non-responders and customers could change, sometimes significantly, and tracking would therefore measure what customers and prospects were assumed to be rather than what they really were.

The optimum measurement window is determined by three factors:

1. The dates of the data feeds that provide information about responses and sales.

2. When the promotion reached the individuals selected to receive it.

3. What the normal "action period" is for the promotion.

Let's consider, for example, a February direct mail campaign. January end-of-month feeds are used in early February to update the marketing database. Somewhere around the middle of the month, selections for the February campaign are made from the database and sent to a lettershop to produce the mailing, and mail is processed and dropped toward the end of February, approximately a three-week turnaround. Standard Mail delivery time by the Post Office is normally 8-15 days, so mail delivery and receipt take place between the end of February and mid-March. Following this is the customer or prospect action period, which ranges from immediate response (1-2 days) up to 45 days or even longer to consummate a sale.

Therefore, for the February mailing, the tracking measurement window should run from mid-March through the end of April to capture the most accurate "before" and "after" picture. The tracking window will have actually begun prior to receipt of the mail (using end-of-January data if updates are monthly), and some action will very likely take place after the tracking measurement window ends at the end of April.

In this situation, the tracking window is as close to the optimum tracking window as possible. Even if feeds could be provided, the database updated, selections made, and the mail dropped all on the same day, there would still be many other variables which cannot be strictly controlled. Mail will be delivered on different days to different prospects and customers, not everyone will read or act upon a mailing the day it is received, and different products have different action windows to close the sale and get it on the company's books. The window is as good as it can be, but it's not perfect.

CONSIDERATION #5: OVERLAPPING WINDOWS

Consider the previous example of the February mailing, with the tracking measurement window from mid-March through April 30. It is now time to do a March mailing. Feeds are provided based on February end-of-month data, the marketing database is updated, mail is processed and dropped, and mailings reach the homes of prospects and customers sometime in the first two weeks of April. Again, the normal action period ranges from immediate response up to 45 days, in this case mid-April through the end of May.
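The window arithmetic used in this example can be sketched directly: a drop date, a delivery lag, and an action period determine when the measurement window opens and closes. The specific drop date below is an assumption for illustration; the duration figures follow the planning assumptions in this example:

```python
# Sketch: compute a tracking measurement window from a mail drop date.
# Assumed planning figures: delivery takes 8-15 days, and the action
# period runs up to 45 days after the latest likely delivery.
from datetime import date, timedelta

def tracking_window(drop_date, min_delivery=8, max_delivery=15, action_days=45):
    """Return (window opens, window closes) for a given drop date."""
    opens = drop_date + timedelta(days=min_delivery)
    closes = drop_date + timedelta(days=max_delivery + action_days)
    return opens, closes

opens, closes = tracking_window(date(2006, 2, 27))  # late-February drop
print(opens, closes)  # prints 2006-03-07 2006-04-28
```

For a drop in the last week of February this yields a window opening in early-to-mid March and closing at the end of April, matching the February example above.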
Note that there is approximately a two-week calendar overlap in the tracking measurement windows, the period from mid-April to the end of the month, during which time customers and prospects might respond to either the February or the March mailing. If some of the same individuals selected in February were also selected in March, the actions of those people would be counted twice: first in the February tracking report and again in the March tracking report.

To the degree that individuals are selected for two successive monthly campaigns, there will be problems in attribution. Which campaign caused the response or sale?

This is the "overlap" problem. It can be neatly sidestepped by implementing business rules that suppress recently selected customers or prospects to keep them from receiving another promotion within a specific period of time. Sometimes, however, there may be very good reasons to promote to the same customer or prospect in rapid succession. That's all right, as long as the attribution problem is recognized and business rules are put in place to attribute the responses or sales to a specific campaign.

Another way to avoid the overlap problem would be to narrow each tracking window to cover a single month. In the example above, it would require limiting the February campaign-tracking window to cover only April, while the March mailing tracking window would cover only May.

Even though on the surface this might appear to be a viable solution, it creates an entirely new problem: the result would be to dramatically undercount rather than possibly overcount the actions taken by customers or prospects. Two weeks of the potential action period would be "cut" from the February campaign tracking window, and two weeks from the March campaign tracking window.

Because a disproportionate amount of response typically occurs within the first two weeks of customer receipt, the risk of dramatically understating results generated by a campaign is great.
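A suppression rule of the kind described above can be sketched in a few lines. The 60-day suppression threshold and the record layout are illustrative assumptions, not figures from this paper:

```python
# Sketch of a suppression business rule: exclude anyone contacted within
# the last N days, so overlapping tracking windows cannot double-count
# the same individual. The 60-day threshold is an illustrative assumption.
from datetime import date, timedelta

def eligible(candidates, last_contacted, as_of, suppress_days=60):
    """Return candidates whose last contact is at least suppress_days old."""
    cutoff = as_of - timedelta(days=suppress_days)
    return [c for c in candidates
            if last_contacted.get(c, date.min) <= cutoff]

last = {101: date(2006, 2, 25), 102: date(2005, 11, 1)}
print(eligible([101, 102, 103], last, as_of=date(2006, 3, 20)))  # prints [102, 103]
```

Individual 101, mailed in late February, is held out of the March selection; individuals with no recent contact (or none on record) remain eligible.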
Results are never evenly distributed throughout the measurement window, and the two weeks that would be cut typically contain the heaviest concentration of response.

CONSIDERATION #6: DATA CURRENCY

It is impossible to produce reliable tracking unless a marketing database is updated monthly or more frequently.

Assuming that database updating is done quarterly, the February mailing described above would be produced from December data feeds. As a result, there would be no truly accurate "before" snapshot from just prior to the campaign. Prospects and members selected will have had more than a month to change their status between the time of the data feed and the time of the select. When this happens, results are measured on campaigns to prospects and members who might be, in fact, a little different than they appear to be.

Similarly, there is no truly accurate "after" snapshot of the file. Individuals who respond in the tracking window ending in April will have an additional two months to change their status, because the next feed that can be used for measurement will not be provided for database updates until the end of June.
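The effect of update frequency on the usable snapshots can be sketched as follows: the "before" snapshot is the latest feed on or before the selection date, and the "after" snapshot is the earliest feed on or after the end of the action period. The feed dates below are illustrative assumptions:

```python
# Sketch: how update frequency bounds the "before"/"after" snapshots.
# The usable snapshots are the feed dates straddling the selection date
# and the end of the action period. Feed dates here are illustrative.
from datetime import date

def straddling_snapshots(feed_dates, selection_date, action_end):
    """Latest feed at or before selection; earliest feed at or after action end."""
    before = max(d for d in feed_dates if d <= selection_date)
    after = min(d for d in feed_dates if d >= action_end)
    return before, after

monthly_feeds = [date(2006, m, 28) for m in range(1, 8)]
quarterly_feeds = [date(2005, 12, 31), date(2006, 3, 31), date(2006, 6, 30)]
selection, action_end = date(2006, 2, 15), date(2006, 4, 30)

print(straddling_snapshots(monthly_feeds, selection, action_end))
print(straddling_snapshots(quarterly_feeds, selection, action_end))
```

With monthly feeds the snapshots tightly bracket the campaign; with quarterly feeds the "before" snapshot is December data and the "after" snapshot cannot arrive until the end of June, which is the problem described above.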
In this situation, the ideal measurement period for a February mailing would be from mid-March through the end of April, but with quarterly updates the only possible window would be from January through June 30. An additional problem with this scenario is that the results of the February mailing would not be known until sometime in July.

WHAT DOES ALL OF THIS MEAN TO YOU?

The ability to track and measure the results of marketing efforts is usually considered critical to the justification of marketing costs. Depending on how campaigns are designed and implemented, conclusions drawn from campaign measurement must be viewed in light of these considerations.

KnowledgeBase Marketing, 1-866-4KNWLDG, www.knowledgebasemarketing.com
© 2006 KnowledgeBase Marketing. All rights reserved.