The Myth of Market Research’s Failure.

Time To Get Over It – It’s NOT That Bad! (Image: “Head in Hands” by Alex E. Proimos, via Flickr)

I am getting increasingly angry about the number of posts, books, YouTube videos and articles – often by market researchers themselves – that imply “conventional Market Research” is a failure.

Here’s a good example – ‘futurist’ Patrick Dixon talking about why market research is “often wrong”: http://tinyurl.com/25kp34z.

These sorts of pronouncements tend to have several things in common:

  • Flashy style and grand pronouncements rather than reasoned argument,
  • Reliance on anecdote or case study (in Dixon’s case it’s his mother),
  • Lack of examples on the other side of the argument (when MR got it right),
  • A (false) assumption that the raison d’être of MR is predicting “big” changes,
  • Failure to acknowledge that methods other than MR are not all that flash at predicting big changes or seismic shifts in behaviour either,
  • An assertion that “traditional MR” misses out on some extraordinarily key factor in understanding consumers, be it an inability to capture emotion, or failure to understand the role of Social Media or whatever uber-trend the author is fascinated by.

Let me counter this hyperbolic dismissal of the value of our traditional approaches with an equally strong counter-claim. I strongly believe that good, experienced senior researchers can – in most markets – answer 70% of the key marketing questions of 70% of major research clients by means of a research programme consisting of no more than a few focus groups, a reasonably sized survey and access to some sales, retail or media trend data. There is an “if”, of course – and it is sometimes a big if: they need to allocate enough time and thought to design the study carefully and analyse the results. This does not mean I am not a believer in many of the new MR methods, particularly some of the new neuroscience, customer panel and online qualitative approaches – let us “seniors” incorporate some of those into the research programme and my success estimate goes up to 80 or 90%! The core point I want to make, though, is that any systemic “failure” of market research is a failure to apply brainpower and thinking time – not primarily a failure of techniques.

Yes, there are increasingly “better” ways to directly measure emotional response than a traditional survey, but it is simply untrue to say that you cannot get any emotional feedback from a survey. I’ve done it – on topics ranging from instant coffee to prostitution, and at every level of intensity in between. Similarly, while online qualitative, MROCs and social media analytics can produce great feedback, the overlap with the information that could be obtained from a conventional FGD is often a lot greater than those selling the new methods are willing to acknowledge.

So why do our analytics so often seem superficial, if the problem is not primarily one of method? Well, here are three examples of common faults that we keep seeing:

  • Designs and outputs that bear only a moderate relationship to the brief that was given. This is often put down as a failure of the client-servicing executive, but the reality is that it is often a failure of systems within the client and the agency to ensure that briefings are structured properly, incorporated into the design, and that outputs are checked against them.
  • Questionnaires that ask only one or two fairly unimaginative questions on the key client objective, and 30 others on all sorts of other, often very peripheral, issues.
  • Focus groups where the moderator talks too much and over-acts, and where no one actually analyses the transcripts or videos later – the claim being that the client will get all they need from a “topline” or a post-group “debriefing”.

“Mundane” faults indeed, but it’s our failure to address such trivial problems that’s really at the heart of our industry’s seeming malaise. Often, when we talk to researchers about these issues, the fault is put down to clients – they “won’t spend the budget for analysis”, “their RFP was very rigid” or “they needed the answer tomorrow”. This is, of course, partly true – if clients pay peanuts, they tend to attract servicing and outputs designed for monkeys. But usually, when you dig, you also discover issues of marketing, people and systems (added value not marketed well, senior people not spending any time on research, brainpower not valued or priced correctly, poor allocation of executive time, etc.).

Too often, as an industry, we seem very ready to accept the hyped pronouncements that we simply cannot measure “emotion” or “value”, or be “predictive”, and hence need to abandon all our old ways and move holus-bolus to new methods. (Nigel Hollis’ recent post on why conventional research can – usually – measure and predict value offers useful light in this context: http://tinyurl.com/2fggstp.) Since, for most agencies, changing business models overnight is not on the cards, this simply engenders a sense of frustration and failure that is, in my view, unjustified. It also seems to divert research management’s attention away from the most important need: working out frameworks and processes to get better at design, analysis and reporting. The tragedy is that, in many of the companies we’ve observed, a few months spent addressing a number of relatively minor faults in existing approaches could yield huge dividends in improving what is delivered to clients.

Let me repeat – I’m a huge fan of the new MR methods coming on-stream at the moment. But I’m also convinced that the big competitive advantage for most market research companies lies in addressing some quite basic faults in research practices – faults in how we deploy two of our key resources: brains and time. Companies that do so will be ready to embrace and integrate new technologies and methods properly, and to take full advantage of their capabilities. Those that simply buy into the “doom and gloom” forecasts about the inherent failure of current research will quickly find that the new techniques they rush to adopt still fail to answer clients’ questions, because they have not been backed up by the necessary design, implementation and delivery standards.

If we have a failure as an industry – and overall I think we do a lot, lot better than we give ourselves credit for – it’s more a failure to consistently provide quality in whatever we do than a problem with the kind of work we do. More a failure of research culture than of research methods, perhaps?


9 Responses to The Myth of Market Research’s Failure.

  1. Frank Martin says:

    Agree – many of the people who jump on this bandwagon have hired researchers who were the lowest competitive bid, or who were trying to get into the business with only a modicum of practical experience. They get what they deserve. They conclude, because the research *failed*, that ALL research is bad, rather than face the truth that they made a poor hiring decision.

    • Alastair Gordon says:

      True, Frank – there is an increasing issue with clients treating MR as a “commodity” purchase, although part of the problem is that MR agencies – even some of the biggest – have some very odd pricing policies and systems in place. Overall we are very bad at pricing for value, and the net result is that we under-price work where we add real value and then over-price the simple stuff to compensate. This is something my business partner, David McCallum, does a lot of work with agencies on, and as he notes, the kind of mixed price signals we often give clients is a big part of our problem as an industry.

  2. Sasha says:

    We live in a semi-professional world. Greed on the client side prevents CEOs from hiring expensive, professional MR managers capable of understanding the added value, or the limitations, of research findings. Instead they hire people who can make results look professional; because they don’t really understand what it all means, they have no way to judge it.

    These semi-professional MR managers often pick the lowest bid because they don’t have enough background to distinguish true promises from false ones, or to weigh advantages, disadvantages and cost-to-value ratios. If you are in the MR business for profit, you have to deal with that: you have to satisfy these people and use pricing models that exploit these “features”.
    As a result, MR is now being commoditised.

    I understand what is happening and where we are heading. What I don’t get is why this process didn’t happen 30-50 years ago. What prevented commoditisation back then? Higher costs acting as a barrier to crappy approaches? Lengthy approaches putting a floor under how low a supplier could fall? I am not sure.

  3. Alan Ellerton says:

    I just wanted to add my support as another who has heard this argument far too many times, both as a client researcher and as a research supplier. I guess it’s inevitable that people will always think the latest development is “the” solution to the problems posed to us by our Marketing colleagues, but it is the application of thinking time and experience that makes the real difference, whether we use tried-and-tested techniques or newly developed ones.

  4. Andy Morrison says:

    I have never understood this angst from those who claim that traditional MR doesn’t work or doesn’t have the attention of senior managers. Perhaps it is because the industry sectors and clients that my company works for generally do highly value what we deliver, predominantly using “traditional” market research tools with a fair dose of new methods. We are “at the table” in many instances when decisions, strategies, and tactics are discussed. The clients, like all clients, want competitive budgets but do tend to still select the partner that can deliver the most value. What does concern me and my colleagues are those in the industry who promote their own new methods in the absence of good theory, scientific rigor, standards, and common sense.

  5. I’ve only recently happened across this post, but would like to add my (belated) contribution as someone who sits in the “myth” camp.

    What is interesting to me is that the support for market research offered here – that 70% of marketing questions can be answered by a mix of market research techniques backed by a senior researcher – doesn’t seem like a particularly strong argument. We could quibble over the semantics, but Patrick Dixon might say that three times out of every ten is relatively “often” for research not to answer marketers’ questions.

    In Dixon’s field (forecasting the future) I’m confident that the rate of error is considerably higher: most product launches don’t take hold and, given that market research is well known and well used, if it – or even one firm selling it – could demonstrably swing the likelihood of success, I suspect we would see many more successes than we do.

    Whilst Dixon uses his mother as an illustration, there is no shortage of legitimate science to illustrate why market research is, in many instances, likely to miss what matters. My book, Consumer.ology, has around 200 references, the majority to journal-published papers.

    Even if the 70% figure is achievable, which I doubt, the question is: do we know which 70% it is? In looking at the times market research gets it wrong – or, just as importantly, doesn’t get it right – we have the opportunity to learn and be more circumspect about what we apply where. My conclusion, from reviewing case studies and looking at what psychologists and neuroscientists have discovered in the last thirty years, is that we should be very, very careful about the questions we ask consumers and the extent to which we believe the answers they give us (however senior we are).

    If we were to discover that senior researchers bring greater accuracy to the business of understanding consumers, is this evidence that market research works, or that it doesn’t? If its efficacy is contingent on the complex balance of experience obtained over many years, rather than on the application of a clearly understood set of tools and techniques, doesn’t that mean market research, per se, doesn’t really work?

    Personally, I am at a loss as to how anyone aware of what psychology has demonstrated about group behaviour could think focus groups were a worthwhile research tool. Softening this technique’s many flaws by mixing it with other methods is no solution at all: the problem persists that whilst kernels of legitimate insight will almost certainly be present, distinguishing them from the artefacts of the research process is a haphazard business.

    I accept that this is the point at which a “senior” researcher would claim, possibly with justification, that they add value. However, as a client or someone advising clients on the application of research, I would always warn against relying on “it will be good because I say it will be” as the basis for deciding how much weight to attach to any particular data source.

    Perhaps the “angst” (as Andy describes it) of people like me, who have looked at different aspects of the way consumers think and found market research lacking, is that we also understand why market research is used: after all, businesses buying a research service are consumers too, and we know that, in order to challenge the power of the comfort market research currently offers, we will have to shout to be heard. It took the Catholic Church 200 years to accept that the Earth wasn’t the centre of the Universe… there’s nothing like strong, misplaced faith to uphold a misguided point of view. I appreciate that this is a harsh parallel to draw, but when the counter-argument to the demonstrable science of why asking people questions is frequently futile is “believe us because we know what we do works”, an objective review of the things past generations believed that turned out to be wrong is worth considering.

    Clients want to make good decisions. I firmly believe that knowing that you have to make a judgment call without support from your target audience (because they are incapable of the accurate introspection or projection that would be required of them) provides a better basis for a decision than comforting but psychologically unreliable research.

    • Alastair Gordon says:

      Philip, thanks for taking the time to respond so fully and thoughtfully. Obviously I disagree with you, but the debate on what MR can, or cannot, contribute to clients is an important one, so it’s good to have a reasoned argument. Since your comment is so extensive, I’m going to respond to your many points in a separate post – up sometime today, hopefully – which you will, of course, be welcome to respond to!

    • Alastair Gordon says:

      For those interested, I have replied in more depth to Philip here: http://wp.me/pBmtI-aA

  6. [...] recent post on the “Myth of Market Research’s Failure” attracted a lot of readers, and recently a stern rebuttal (see comments at bottom of the post) from [...]
