Kevin Hillstrom: MineThatData

Exploring How Customers Interact With Advertising, Products, Brands, and Channels, using Multichannel Forensics.

May 20, 2009

An Open Letter To E-Mail Marketers: Shopping Cart Abandonment E-Mail Campaigns

There appears to be some criticism about my view of shopping cart abandonment e-mail marketing programs.

So, my fellow e-mail marketers, the vast majority of whom act honestly and run opt-in campaigns with integrity, let's consider the following:

Let's pretend that 100 customers abandon a shopping cart on Monday. On Tuesday, you send a targeted e-mail campaign, and you observe the following statistics:
  • 30 customers click through the e-mail campaign, and 50% of those individuals buy something, meaning that 15 of the 100 customers (15%) purchased because of the campaign.
So these are good numbers, right?! I mean, who in their right mind would ever complain about an e-mail campaign that delivers a 15% response rate?

One of the challenges of e-mail marketing is that e-mail marketers like you and me are used to measuring "positives". We are driven to measure positive outcomes. Our metrics are calibrated to highlight anything we do that is good.

But what about the 85% that did not purchase? What if we angered 25 of the 85 customers, and they don't ever come back and buy from us again, because of our marketing program? Are we measuring this important KPI? Probably not ... because it is truly hard to measure negatives, isn't it?

There are three things we can do to prove that shopping cart abandonment e-mail campaigns are good for us, and good for the customer.
  1. Execute e-mail campaign mail/holdout groups. If 15 of 100 customers purchase in the shopping cart abandonment e-mail campaign, and 11 of 100 customers purchase in the holdout group, then we got an incremental 4 customers to purchase. 4 is still better than 0, right? But we do need to measure the incrementality of our marketing activities, don't we? We cannot take credit for orders that would have happened anyway.
  2. Follow the mail/holdout group for a year. See if, at the end of twelve months (or even three months), the group that received this type of marketing campaign spent any additional money. If so, good, it means that as a whole, the campaigns are working. But what if the groups have equal performance, when measured over the long-term? If this happens, then we are simply shifting demand, we're not actually creating demand.
  3. Quickly identify customers who do not interact with these campaigns, and create a field in your database so that you don't necessarily send these campaigns to that audience.
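The mail/holdout comparison in step 1 can be sketched in a few lines of Python. The counts below are the hypothetical figures from the example above, and the significance check is a standard two-proportion z-test:

```python
import math

# Hypothetical counts from the example above
mailed_buyers, mailed_n = 15, 100      # received the abandonment campaign
holdout_buyers, holdout_n = 11, 100    # held out from the campaign

p_mailed = mailed_buyers / mailed_n
p_holdout = holdout_buyers / holdout_n
incremental_rate = p_mailed - p_holdout  # 0.04 -> 4 incremental buyers per 100

# Two-proportion z-test: is the 4-point lift distinguishable from noise?
p_pool = (mailed_buyers + holdout_buyers) / (mailed_n + holdout_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / mailed_n + 1 / holdout_n))
z = incremental_rate / se

print(f"Incremental response: {incremental_rate:.1%}, z = {z:.2f}")
```

Note that with cells of only 100 customers each, this 4-point lift is not statistically significant (z is well under 2), which is itself a lesson: mail/holdout tests need adequate sample sizes before we declare victory.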
If any marketing campaign works, e-mail or otherwise, then we'll observe an improvement in at least one of the following metrics/KPIs:
  • An increase in the annual customer retention rate, maybe from 44% to say 47%.
  • An increase in the annual customer reactivation rate, maybe from 13% to say 15%.
  • An increase in orders per retained/reactivated customer, from 2.25 to 2.35 as an example, measured annually.
  • An increase in average order value, from $125 to maybe $132, measured annually.
  • An increase in new customers, measured on an annual basis.
  • An increase in customer profitability, measured on an annual basis.
As an e-mail marketing community, we need to demonstrate to others that shopping cart abandonment e-mail marketing programs increase one or more of the six metrics I just listed, while not angering other customers. Given the tools listed in this blog post, that's not hard to do, is it?

And guess what? The long-term testing is just as likely to prove that the value of this marketing program is more than what is illustrated by traditional metrics as it is likely to prove that the value is less. When you convert a customer to a purchase, their future value is significantly increased --- so the testing may show that this style of marketing is essential.

Let's have a balanced perspective ... marketing works positively for some, works negatively for others. The sum of the two can be measured via testing. This is what I'm advocating in the article --- summing the positive, negative, and incremental outcomes. To only focus on half of the metric set is misleading.

We can do this kind of testing!


June 03, 2008

Great Moments In Database Marketing #1: Incremental Value

Our top rated Database Marketing moment takes us back to 1993 - 1994. Yeah, way back then, people were doing sophisticated work. Honestly!

Way back in the early 1990s at Lands' End, we had seven different business units that marketed to customers, either through standalone catalogs, or through pages added to catalogs.

As growth became more and more difficult (pay close attention online marketers ... your world is heading in this direction), management elected to mail targeted catalogs to targeted customer segments.

In other words, a Men's Tailored catalog concept was developed, with a half-dozen or more incremental catalogs mailed to customers who preferred Men's Tailored merchandise. A Home catalog concept was developed, with nine or more incremental catalogs mailed to customers who preferred Home merchandise.

Seven concepts were developed. Each concept was growing.

But the core catalog, the monthly catalog mailed for three decades, was not really growing anymore. And total company profit (as a percentage of net sales) was generally decreasing over time.

Something was amiss.

We studied the housefile, and learned that the "best" customers were being "bombed" by catalogs ... upwards of forty a year. Every business unit, making independent mailing decisions, mailed essentially the same customers. And all of our metrics, when viewed at a corporate level, indicated that customers were not spending fundamentally more than they spent several years ago when the new business concepts didn't exist.

So we developed a test. We selected ten percent of our housefile, and created seven columns in a spreadsheet. We randomly populated each column with the words "YES" or "NO", at a 50% / 50% proportion. Each business unit was assigned to a column. When it came time to make mailing decisions for that business unit, we referred to the column assigned to the business unit. If the word "NO" appeared, we did not mail the customer, even if the customer qualified for the mailing based on RFM or model score criteria.

In statistics, this is called a 2^7 Factorial Design.
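A minimal sketch of that assignment scheme, assuming seven business units and illustrative names (the unit labels and panel size here are hypothetical, not the actual Lands' End setup):

```python
import random

random.seed(42)  # reproducible assignments for the sketch

# Seven columns, one per business unit
business_units = [f"unit_{i}" for i in range(1, 8)]

def assign_customer():
    """Randomly assign YES/NO at a 50/50 proportion for each business unit."""
    return {unit: random.choice(["YES", "NO"]) for unit in business_units}

# The test panel: one row per customer, seven YES/NO columns
panel = [assign_customer() for _ in range(1000)]

def may_mail(customer_row, unit):
    """A business unit mails the customer only when that customer's
    column reads YES (assuming the customer also qualifies on RFM
    or model score criteria)."""
    return customer_row[unit] == "YES"
```

Because every column is randomized independently, all 2^7 = 128 YES/NO combinations occur in the panel, which is what lets you estimate each business unit's incremental contribution separately.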

There are two reasons for designing a test of this nature.
  1. Quantify the incremental value (sales and profit) that each business unit contributes to the total brand.
  2. Identify, across customers segments, the number of catalogs a customer should receive to optimize profitability.
What did we learn?
  1. Each catalog mailed to a customer drove less and less incremental increases in sales. If a dozen catalogs caused a customer to spend $100, then two dozen catalogs caused customers to spend $141, and three dozen catalogs caused customers to spend $173. The relationship roughly approximated the Square Root Rule you've read so much about on this blog.
  2. Each business unit, on average, was contributing only 70% of the volume that company reporting suggested the business unit was contributing. In other words, if you didn't mail the catalogs, you'd lose 70% of the sales, with customers spending 30% elsewhere.
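The diminishing returns in point 1 closely track spend scaling with the square root of catalogs mailed. Using the figures from the example above, anchored at $100 for a dozen catalogs:

```python
import math

base_catalogs, base_spend = 12, 100.0

def projected_spend(catalogs):
    """Square Root Rule: customer spend scales with the square root
    of the number of catalogs mailed, anchored at $100 for a dozen."""
    return base_spend * math.sqrt(catalogs / base_catalogs)

for n in (12, 24, 36):
    print(n, "catalogs ->", round(projected_spend(n)))
# 12 -> 100, 24 -> 141, 36 -> 173, matching the figures in the text
```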
The latter point is critical.

Take a look at the table below; one column illustrates the profit and loss statement as reported by finance, and one applies the results of the test.

Test Results Analysis

                                  Finance Reported    From Test Results
Demand                                 $50,000,000          $35,000,000
Net Sales               82.0%          $41,000,000          $28,700,000
Gross Margin            55.0%          $22,550,000          $15,785,000
Less Marketing Cost                     $9,000,000           $9,000,000
Less Pick/Pack/Ship     11.0%           $4,510,000           $3,157,000
Variable Profit                         $9,040,000           $3,628,000
Less Fixed Costs                        $6,000,000           $6,000,000
Earnings Before Taxes                   $3,040,000          ($2,372,000)
% Of Net Sales                                7.4%                -8.3%

The test indicated that what appeared to be highly profitable business units were actually marginally profitable, or in some cases, unprofitable. In this example, the business unit is "70% incremental", meaning that if the business unit did not exist, 70% of the sales volume would disappear, while 30% would be spent anyway by the customer, spent on other merchandise.
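Both columns of the table can be reproduced from demand alone; the only change in the test-adjusted column is scaling demand by the 70% incrementality factor. A sketch using the ratios from the table (82% fulfillment, 55% gross margin, 11% pick/pack/ship):

```python
def pl_statement(demand, marketing=9_000_000, fixed=6_000_000):
    """Build the profit and loss statement using the table's ratios:
    82% of demand ships as net sales, 55% gross margin,
    11% of net sales spent on pick/pack/ship."""
    net_sales = demand * 0.82
    gross_margin = net_sales * 0.55
    pick_pack_ship = net_sales * 0.11
    variable_profit = gross_margin - marketing - pick_pack_ship
    ebt = variable_profit - fixed
    return ebt, ebt / net_sales

reported = pl_statement(50_000_000)         # finance view of demand
adjusted = pl_statement(50_000_000 * 0.70)  # only 70% is incremental

print(reported)  # EBT of $3,040,000, or 7.4% of net sales
print(adjusted)  # EBT of -$2,372,000, or -8.3% of net sales
```

The same function, two demand figures: the 7.4% profit and the -8.3% loss differ only because 30% of the "reported" demand would have been spent anyway.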

Imagine if you were the EVP responsible for a business unit that appeared to generate 7.4% pre-tax profit, only to have some rube in the database marketing department tell you that your efforts are actually draining the company of profit?


Why Does This Matter?

This style of old-school testing (which is more than a hundred years old, with elements of the testing strategy now employed aggressively in online marketing) tells you how valuable your marketing and merchandising initiatives truly are.

Catalogers fail to do this style of testing, not realizing that a portion of catalog driven sales would still be generated online (or in other catalogs). In 2008, most catalog marketers are grossly over-mailing existing buyers. Catalog Choice, in part, exists due to catalogers mis-reading this phenomenon.

E-mail marketers seldom execute these tests, not realizing that in many cases almost all of the sales would still be generated online. E-mail marketers, ask your e-mail marketing vendor to partner with you on test designs like the ones mentioned in this article. You may be surprised by what you learn!

Online marketers are more likely than most marketers to execute A/B splits at minimum, with some executing factorial designs. Many online brands evolve in a Darwinian style, fueled by the results of factorial designs. Online marketers know that you make mistakes quickly, and you correct those mistakes quickly.

Web Analytics folks have the responsibility to tell management when SKU proliferation no longer contributes to increased sales. It is important for Web Analytics folks to lead the online marketing community, shutting off portions of the website in various tests to understand the incremental value of each additional SKU.

What are your thoughts on this style of testing? What have you learned by executing tests of this nature?
