I am getting increasingly angry about the number of posts, books, YouTube videos and articles – often by market researchers themselves – that imply “conventional Market Research” is a failure.
Here’s a good example, ‘futurist’ Patrick Dixon talking about why market research is “often wrong”: http://tinyurl.com/25kp34z .
These sorts of pronouncements tend to have several things in common:
- Flashy style and grand pronouncements rather than reasoned argument,
- Reliance on anecdote or case study (in Dixon’s case it’s his mother),
- Lack of examples on the other side of the argument (when MR got it right),
- A (false) assumption that the raison d’être of MR is predicting “big” changes,
- Failure to acknowledge that methods other than MR are not all that flash at predicting big changes or seismic shifts in behaviour either,
- An assertion that “traditional MR” misses out on some extraordinarily key factor in understanding consumers, be it an inability to capture emotion, or failure to understand the role of Social Media or whatever uber-trend the author is fascinated by.
Let me counter this hyperbolic dismissal of the value of our traditional approaches with an equally strong counter-claim. I strongly believe that good, experienced senior researchers can – in most markets – answer 70% of the key marketing questions of 70% of major research clients with a research programme consisting of no more than a few focus groups, a reasonably sized survey and access to some sales, retail or media trend data. There is an “if” of course – and it is sometimes a big if – they need to allocate enough time and thought to design the study carefully and analyse the results. This does not mean I am not a believer in many of the new MR methods, particularly some of the new neuroscience, customer panel and online qualitative approaches – let us ‘seniors’ incorporate some of those into the research programme and my success estimate goes up to 80 or 90%! The core point I want to make, though, is that any systemic “failure” of market research is a failure to apply brainpower and thinking time – not primarily a failure of techniques.
Yes, there are increasingly “better” ways to directly measure emotional response than a traditional survey, but it is simply untrue to say that you cannot get any emotional feedback from a survey. I’ve done it – on topics ranging from instant coffee to prostitution and every level of intensity in between. Similarly, while online qualitative, MROCs and social media analytics can produce great feedback, the overlap with the information that could be obtained from a conventional FGD is often a lot greater than those selling the new methods are willing to acknowledge.
So why do our analyses so often seem superficial if the problem is not primarily one of method? Well, here are three common faults that we keep seeing:
- Designs and outputs that bear only a moderate relationship to the brief that was given. This is often put down to a failure of the client-servicing executive, but in reality it is often a failure of systems within both client and agency: systems to ensure that briefs are properly structured, incorporated into the design, and checked against the final outputs.
- Questionnaires that ask only one or two fairly unimaginative questions on the key client objective, and 30 others on all sorts of other, often very peripheral, issues.
- Focus groups where the moderator talks too much and over-acts, and no one actually analyses the transcripts or videos afterwards – on the claim that the client will get all they need from a “topline” or a post-group “debriefing”.
“Mundane” faults indeed, but it’s our failure to address such trivial problems that’s really at the heart of our industry’s seeming malaise. Often, when we talk to researchers about these issues, the fault is put down to clients – they “won’t spend the budget for analysis”, “their RFP was very rigid” or “they needed the answer tomorrow”. This is, of course, partly true – if clients pay peanuts they tend to attract servicing and outputs designed for monkeys. But usually, when you dig, you also discover issues of marketing, people and systems: added value not marketed well, senior people not spending any time on research, brainpower not valued or priced correctly, poor allocation of executive time, and so on.
Too often, as an industry, we seem very ready to accept the hyped pronouncements that we simply cannot measure “emotion” or “value”, or be “predictive”, and hence need to abandon all our old ways and move holus-bolus to new methods. (Nigel Hollis’ recent post on why conventional research can – usually – measure and predict value offers useful light in this context: http://tinyurl.com/2fggstp). Since, for most agencies, changing business models overnight is not on the cards, this simply engenders a sense of frustration and failure that is, in my view, unjustified. It also diverts research management’s attention away from the most important need: working out frameworks and processes to get better at design, analysis and reporting. The tragedy is that, in many companies we’ve observed, a few months spent addressing a number of relatively minor faults in existing approaches could yield huge dividends in improving what is delivered to clients.
Let me repeat – I’m a huge fan of the new MR methods that are coming on-stream at the moment. But I’m also convinced that the big competitive advantage for most market research companies lies in addressing some quite basic faults in research practices – faults in how we deploy two of our key resources: brains and time. Companies that do so will be ready to embrace and integrate new technologies and methods properly, and to take full advantage of their capabilities. Those that simply buy into the “doom and gloom” forecasts about the inherent failure of current research will quickly find that the new techniques they rush to adopt still fail to answer clients’ questions, because they have not been backed up by the necessary design, implementation and delivery standards.
If we have a failure as an industry – and overall I think we do a lot, lot better than we give ourselves credit for – it’s more about a failure to consistently provide quality in whatever we do, rather than primarily being about the kind of work we do. More a failure of research culture than research methods perhaps?