Our recent post on the “Myth of Market Research’s Failure” attracted a lot of readers and, recently, a stern rebuttal (see the comments at the bottom of the post) from Philip Graves, author of a book called “Consumer.ology” (not to be confused with Martin Lindstrom’s similarly titled tome, “Buy.ology”). The book is sub-titled “The Market Research Myth, The Truth about Consumers and the Psychology of Shopping”, and according to an unnamed reviewer on Mr Graves’ website it “will send a shiver down the spine of the research industry”. Given that, it is not surprising he did not like our post!
The book was published last September and, I confess, I have not yet read it, but it has earned a “top 10” spot in the UK business books list on Amazon, and Philip seems to have been interviewed extensively, including by magazines like “Wired” and even “Research Live”. It seems, then, indicative of the trend towards wholesale lambasting of market research methods that we discussed in our post. The core argument, as far as we can see, is that surveys address the wrong side of the brain (i.e. miss emotive/intuitive response) and that our qualitative methods are subject to bias and groupthink.
The great thing about this kind of attack is that it draws attention to the need for research to be more serious about providing better quality information to clients. The downside is that such attacks tend to work by creating a “straw man” version of market research, in which outdated and poor practices are treated as the norm and used as an excuse to belittle the whole industry.
Our view, stated simply, is that it is often perfectly possible to design fairly conventional research that measures a good deal of underlying emotional or subconscious response, and that where traditional methods fall short, many of the emerging “new MR” methods can be applied to fill the gaps. Clients or agencies who are dissatisfied with the quality of their research deliverables should consider what can be done to improve them, rather than giving up on market research entirely. (As a disclaimer, we should mention that Gordon & McCallum provides clients with review and R&D support to improve MR systems and outputs – so we would believe that!)
Let us, then, return to the specifics of Philip’s criticism of the “Myth” post. We are not defending all market research in all circumstances. Instead we are pleading for more effort in areas like research design, interpretive thinking and training, rather than simply focusing on a search for superficially attractive “silver bullet” solutions. In this context the “70%” success figure applies to the helpfulness of “conventional” MR methods (e.g. surveys, focus groups) and does not imply that MR would fail 30% of the time, only that a proportion of marketing issues are now better answered by newer MR techniques (e.g. those based on the emerging neuroscience, social media research, community panels and so on), or are simply so complex or nebulous that (as Philip implies) a judgment call is going to be as helpful as doing research.
On the issue of the value of experience and seniority, we do not argue against frameworks or the systematic application of research practices (indeed, helping clients get more ROI from research by applying such frameworks is what Gordon & McCallum specialises in). But research often tackles complex issues, and no matter what systems you put in place, it is improved if you add some time for thought about design and interpretation and involve people with relevant experience. A core reason why research projects sometimes fail is that the role of time and brainpower is not fully appreciated. When we do research reviews, one of the big things we look at is the development of frameworks and processes so that senior researcher time can be redeployed from “mundane” tasks to areas like interpretation and research design. Few things add more value to research outputs, and in our view the fact that adjusting the research process can make such a significant impact on client satisfaction is in itself evidence that our problems are more about how we do things, less about what we do.
The fact that experience is helpful simply does not mean that market research is unsystematic. The analogy here is with doctors – all doctors (hopefully) are well trained and have diagnostic and interpretive frameworks to work with. However, a 15-minute consultation with a junior general practitioner is not going to yield the level of treatment advice that a longer visit to an experienced specialist will provide. That does not mean, however, that you should rely on self-diagnosis rather than seeing the junior doctor when you are sick!
As I mentioned, I have not yet read “Consumer.ology”, and I’m willing to believe it may avoid the issues noted in our post. However, we would note that “proof by example” is a poor method of debunking market research. MR is a multi-billion dollar industry consisting of a wide range of techniques and skill sets. In its current form it goes back to the 1920s, and examples of its success abound in journals (e.g. the Journal of Marketing Research), whilst every year in forums like ESOMAR conferences, clients voluntarily get up and present case studies of how MR has helped them. Even in the area of prediction (where most criticism of MR focuses), companies like Novaction or BASES have put extensive effort into validation and testing of their services. The scale of MR means it is relatively easy to find examples of poor research, but these are a tiny percentage of the total – critics of our practices need to show they have scoured the journals and can refute the far more numerous examples of success and validation.
This does not mean that market research cannot be improved or made more cost-effective. But this is primarily a matter of applying new methods creatively or designing questions and qualitative research with more care. As a simple example, Philip’s point about “group-think” in focus groups identifies a real problem, but it is an ancient and well-known one, and avoiding it is frankly a “qualitative research 101” issue. In most cases it results from poor moderation, a lack of the right exercises and techniques, or superficial analysis. Understanding of consumer emotions is another area where MR is often criticized, but here too there are a plethora of techniques and many exciting new developments that mean the superficial responses derived from badly designed survey questions are perfectly avoidable. (On this one, Alastair is currently writing a chapter on developments in emotional research, in a book called “Leading Edge Marketing Research” to be published by Sage next year – it will contain more of the references Philip requests.)
Finally, we reject the false dichotomy between the role of a client’s judgment and the utilisation of market research. Good marketing is part art, part science, and market research is only one of the inputs required in decision-making. But well-designed research has proved to be hugely valuable to numerous clients, and it is doubtful that companies like Procter & Gamble or Unilever would be the size they are without their strong commitment to the systematic utilisation of market research. To reiterate: rather than rejecting the application of market research altogether, clients would be better advised to focus on improving how they carry out research. In many cases an independent review to ensure their research systems and practices are up-to-date and efficient would save huge amounts of money and solve most of the problems Philip mentions.
So yes, market research cannot solve all problems or predict every possibility – but it does a lot better than Philip implies in his response. While we can (and should) improve in many areas, the starting point for such improvements is thought and planning, not a simplistic rejection of the whole industry.