Peter Swan
A passer-by happens upon a drunk searching for a lost wallet under a streetlight. With nothing in plain sight, the passer-by asks, “Where did you drop your wallet?”
“Over there,” gestures the drunk across the street, “but I’m looking here because this is where the light is.”
We often look for answers in the easiest place, and not necessarily where the answer is to be found. As marketing moves from subjective art toward objective, data-driven science, are we seeing the emergence of a streetlight effect? Are even the very best big-data-driven practices guilty of asking the wrong questions of the wrong data?
Most companies turn to analytics when early growth starts to slow. The familiar refrain, “Let’s make better use of our existing data”, heralds the onset of maturity, when the halcyon early days of triple- and double-digit growth are well and truly past.
Initial questions asked of big data are typically, “Who are our best customers?” and “Which products are most profitable?” It soon becomes clear that performance differs by region, season and a host of other factors. So, it’s not long before we want to know, “How do quarterly sales in region A compare with region B, on products X, Y, and Z?”
Next comes propensity to respond (PTR) modelling, used to classify prospects for acquisition, cross-sell, churn, or fraud. Where they exist, single customer views enable an entire family of PTR models used to determine next-best actions.
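For concreteness, here is a minimal sketch of what a PTR model can look like in practice: a logistic regression that scores customers on how likely they are to respond to an offer, so that the next campaign can be ranked by propensity. The table of customer features, the “responded” flag and the use of scikit-learn are illustrative assumptions, not any particular firm’s implementation.

```python
# Illustrative propensity-to-respond (PTR) sketch (hypothetical data):
# fit a logistic regression on past responses, then rank customers by
# their predicted propensity to respond to the next offer.
import pandas as pd
from sklearn.linear_model import LogisticRegression

customers = pd.DataFrame({
    "tenure_months":    [3, 26, 14, 48, 7, 33],
    "orders_last_year": [1, 9, 4, 12, 2, 7],
    "avg_order_value":  [40.0, 95.0, 60.0, 120.0, 35.0, 80.0],
    "responded":        [0, 1, 0, 1, 0, 1],  # took up the previous offer?
})

features = customers.drop(columns="responded")
model = LogisticRegression().fit(features, customers["responded"])

# Propensity scores used to rank prospects for the next campaign.
customers["ptr_score"] = model.predict_proba(features)[:, 1]
print(customers.sort_values("ptr_score", ascending=False))
```

In real deployments the same pattern is repeated per outcome (acquisition, cross-sell, churn, fraud), which is what produces the “entire family” of PTR models mentioned above.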
Competing marketing priorities soon warrant marketing mix modelling (MMM), to estimate the marginal product of advertising spends across different channels. Next, MMM naturally leads to attribution modelling, to estimate how each channel contributes to the final sale.
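To make that idea concrete, the sketch below shows the simplest possible MMM: an ordinary regression of weekly sales on spend per channel, where each coefficient stands in for that channel’s marginal return. The channels, figures and scikit-learn regression are illustrative assumptions; a production MMM would also model adstock, saturation and seasonality.

```python
# Illustrative marketing mix modelling (MMM) sketch (hypothetical data):
# regress weekly sales on spend per channel; each coefficient is a rough
# estimate of the marginal sales return per extra unit of spend.
import numpy as np
from sklearn.linear_model import LinearRegression

# Weekly spend (in $000) on TV, search and social, and weekly sales ($000).
spend = np.array([
    [50, 20, 10],
    [60, 25, 12],
    [40, 30, 15],
    [55, 22, 20],
    [70, 18, 8],
    [45, 35, 25],
])
sales = np.array([300, 340, 310, 335, 355, 330])

mmm = LinearRegression().fit(spend, sales)
for channel, coef in zip(["tv", "search", "social"], mmm.coef_):
    print(f"{channel}: ~{coef:.2f} extra sales per unit of spend")
```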
The current holy grail of big-data-driven marketing is to offer in real time the most likely product, at the most likely price, to the most likely customer, at the most likely time, via the most likely channel.
But do big data and its analysis make sense in the first place? Like the drunk under the streetlight, have we been seduced into looking for answers where it is easiest to look, namely in the data we gathered from past sales to previous customers? And is that data relevant for understanding future sales to future customers?
Nothing in the customer data gathered, or in the way it is presently being analysed, addresses the fundamental consumer desire to find the best available combination of price and product at the lowest search cost. All that segmenting and clustering and PTR scoring leaves our future consumers cold, stranded, outnumbered – feeling besieged and set upon.
Consumers are boundedly rational humans optimised over generations for “fight or flight” and not for solving the n-dimensional optimisation problem that is rational consumer choice.
Tasked with buying a car, my siblings, with common genetic and environmental influences, will likely arrive at consumption choices different from mine. If even those closest to me exhibit different preferences, why are these “previous customer” strangers, who share neither nature nor nurture with me, being used to suggest products for me?
Why model the choices of thousands of people I don’t know, and who don’t know me, in an effort to suggest products to me?
No consumer identifies with the clusters or segments thrown up by maximum likelihood (ML) models. All this ML exertion belies the constant state of flux wrought by Adam Smith’s invisible hand and writ large in every single consumption choice.
We inhabit a complex and rapidly changing world, and these analytical models know little about my current preferences and circumstances.
The circumstances of markets, like those of individuals, can change in an instant. Products sell out, forcing consumers to choose from what’s available or to wait. Products stagnate. Promotions and discounts alter the relative attractiveness of one product compared with another, stimulating sales of one and depressing sales of another.
Individual finances wax and wane as personal circumstances alter. Each and every purchase decision is a moveable feast. Even simple choices become rapidly complicated. Little wonder consumers throw their hands up and head for the safe harbour of brand, or convenience, or availability.
The data we should be analysing – small data – is the time-varying vector of product attributes and prices. This is the data consumers – your customers and your competitors’ customers – are using when choosing.
To the extent of their ability, each consumer is assessing, comparing and evaluating the products and services on offer as bundles of attributes with their corresponding “shadow prices”: trading this attribute off against that, trying to identify the combination of attributes and shadow prices that best suits them, taking into account their own constantly shifting preferences over the attributes and their own changeable circumstances.
What you should be doing is maximising the net utility of your potential customers, given the attributes of your product.
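As a hedged sketch of that prescription, suppose a consumer’s preferences at a given moment can be summarised as weights over product attributes – in effect, their personal shadow prices. The recommendation problem then reduces to surfacing the product whose attribute bundle, net of price, scores highest for that individual. The attributes, weights and prices below are purely illustrative.

```python
# Illustrative sketch of attribute-based net utility (hypothetical numbers):
# score each product as a weighted bundle of attributes minus its price,
# and recommend the one with the highest net utility for this consumer.
products = {
    "car_a": {"fuel_economy": 7.0, "safety": 9.0, "boot_space": 4.0, "price": 32000},
    "car_b": {"fuel_economy": 9.0, "safety": 7.0, "boot_space": 6.0, "price": 28000},
    "car_c": {"fuel_economy": 6.0, "safety": 8.0, "boot_space": 9.0, "price": 35000},
}

# The consumer's current (and changeable) willingness to pay per unit of
# each attribute -- in effect, their personal shadow prices.
weights = {"fuel_economy": 1500, "safety": 2500, "boot_space": 1000}

def net_utility(attrs: dict, weights: dict) -> float:
    """Weighted attribute value minus the asking price."""
    value = sum(weights[a] * attrs[a] for a in weights)
    return value - attrs["price"]

scores = {name: net_utility(attrs, weights) for name, attrs in products.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```

The point of the sketch is that the inputs are the product attributes and the individual’s own, changeable weights – not the historical behaviour of thousands of unrelated past customers.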
Analysing customer data to minimise the error of estimation isn’t helping your customers to solve their problems – it is compounding them. The manifold combinations and permutations are adding to the burden, not lightening the load.
Customers will pay you with their custom simply for reducing their search costs. Faced as they are with overwhelming choice, customers want up-to-date, reliable, valid and trustworthy recommendations, instantly available, that embody their own personal preferences and budgets.
Peter Swan is a professor of finance at the Australian School of Business and co-inventor, with data scientist Stuart Dennon, of Choice Engine.