Lately, there’s been a lot of back and forth in the research blogs on the topic of ISO standards being introduced to the US. This has stoked the long-running debate on competency and certification in our industry, ostensibly to ensure that those using research are assured of a certain quality of service. It also speaks to a yearning amongst researchers to see our craft recognised as a ‘proper’ profession, taking its place alongside the lawyers, architects, accountants, and their ilk.
After all, no-one in their right mind would engage a professional who was not qualified under their particular society’s standards and requirements, quite apart from the fact that unqualified doctors, lawyers, and others are legally barred from practising anyway.
One of the main reasons people don’t use unqualified professionals is not that they doubt the practitioner’s intellect or piece of paper, but the potential damage an unqualified professional can cause. Engaging an uncertified accountant could cost you dollars in excess tax (or a visit from the Inland Revenue for not paying enough!). The wrong architect could leave you the proud owner of a condemned property. And the consequences of being seen by an untrained surgeon don’t bear thinking about.
The problem with using poor or unprofessional MR is that it’s not likely to have a particularly serious consequence in any one case. Yes, biased market measurement data could give you a wrong fix on your brand share and potentially lead you to misallocate marketing funds. However, very few marketing decisions are based on MR alone; they are usually founded on a mix of information such as sales-force feedback, distributors’ records, or input from the ad agency, amongst others. So, in the bad cases, there is a fair-sized safety net when ‘rogue’ MR data rears its head.
No, the real consequence of employing unprofessional, inexperienced, or under-trained MR is one of lost opportunities. ‘Poor’ research tends to be somewhat anaemic or vanilla: the questions weren’t imprecise or badly phrased, they were just the wrong questions in the first place. The findings were not inaccurate; in fact they were backed up by our other sources. But that’s just it: they only told us what we already knew!
Lost opportunities translate into lack of competitive edge, sluggish response, dilution of equity, and the weakening of price premiums. Any research which does not add to learning and understanding, no matter how incrementally, can be said to have failed as ‘professional’ advice or guidance.
We need to create not just a fear of the unknown, but a fear of not knowing there is an unknown. Part of the armoury could be a list of marketing success stories where research was demonstrably a main contributor. Clients who buy research as a commodity significantly decrease their chance of finding one of those ‘nuggets’. Specifically, if pressure were put on research-buyers by their internal customers to commission opportunity-revealing research, then there would be an incentive. This message needs to be targeted at the broader marketing and senior management audience – a task at which, to date, our industry has not excelled. (If nothing else, it may help dilute the impact of the procurement process and its numbing effect on creativity.)
I very much support anything that will establish and reinforce relevant certification in the business and thus raise research’s professionalism for the benefit of all. But, collectively, we need to emphasise real opportunity costs and implicit risks just as continually as we do qualifications, experience, and expertise.