Getting feedback on your life science marketing initiatives
One of the simplest methods of gathering feedback is measuring some byproduct of your marketing actions. Measurement is a valid technique for judging the effectiveness of your marketing activities, but in many sectors of B2B life science marketing, there is a catch. Frequently, the volume of transactions isn’t high enough to make this type of feedback statistically meaningful. If you only sell a couple of hundred instruments a year, or if you land only a couple of dozen contracts, drawing a direct connection between a particular email campaign, trade show effort or series of white papers and an increase in sales is tenuous at best. As the preclinical scientists like to say, a one-rat study won’t tell you very much.
In addition, it can be extremely difficult to separate the impact of marketing initiatives from the efforts of salespeople, who are typically involved in closing contracts in most B2B life science sectors. Put another way, establishing a direct cause-and-effect relationship between marketing efforts and top-line revenue is difficult.
As a result, companies often look farther up the sales funnel and measure other “conversions.” How many people opened that email blast? How many people brought the direct mail card to the trade show booth? How many people visited the landing page? These data can give you some sense of the effectiveness of your marketing campaigns.
But you have to actually implement these campaigns to measure them. This is fine if you already have two different email blasts and an established list of recipients; you can split your list and send each blast to half. But what if you want to get some feedback on a campaign before implementation? What if you want to determine which of three messages would be most effective when marketing to research scientists? What if you want to understand which strategic positioning will prove more compelling to decision makers and decision influencers? In these cases, there is no existing campaign that can be measured, no alternatives whose performance can be tracked. This is where qualitative research is useful.
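The list-splitting step above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the recipient addresses and helper name are invented, not from the source): each recipient is randomly assigned to one of two halves, so each email blast goes to a comparable group.

```python
import random

def split_list(recipients, seed=42):
    """Randomly split a recipient list into two equal-sized halves
    for an A/B test of two email blasts (hypothetical helper)."""
    shuffled = recipients[:]               # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)  # fixed seed makes the split reproducible
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]  # group A gets blast 1, group B gets blast 2

# Hypothetical recipient list for illustration only
group_a, group_b = split_list([f"scientist{i}@lab.example" for i in range(100)])
```

Random assignment, rather than splitting alphabetically or by sign-up date, helps ensure that any difference in open or click rates reflects the blasts themselves rather than some hidden pattern in the list.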
The purpose of qualitative market research
Why conduct qualitative research? “The purpose of research is to get an answer,” says David Nerz, president of MLN Market Research, a firm that provides qualitative marketing and dynamic group problem-solving research services.
“Qualitative research can be a very effective tool in many areas,” he goes on to say. “Areas such as new product development; positioning and messaging; brand perception; attitudinal studies; advertising and promotions; and habits, usage and the dynamics of consumption are all areas where the right kind of qualitative research can provide insightful answers to inform your decisions.”
But qualitative research cannot answer every question. “There are some areas where qualitative research isn’t appropriate, like determining market size, estimating sales figures or uncovering price points,” he continues. You must understand the strengths and limitations of whatever research method you choose; not all methods will be appropriate in all situations.
“Qualitative research gives you a peek into the market, a visceral understanding of the situation,” David says. This understanding of the market can be extremely valuable, but there are many pitfalls that can diminish the value of your information.
Pitfalls in conducting research in the life sciences
There are many avenues to collect information. The ease with which web-based surveys and polls can be created allows almost anyone to throw some questions in front of an audience. But the phrase “Garbage in, garbage out” has particular relevance in qualitative research. The mechanism by which you ask the question and collect the answers is only one small factor in successful research.
Other factors include your research goals, the type of research, the respondents you target, the questions you ask and the participation of your management team. Each plays a role in a successful research effort.
It is vitally important to clarify your goals before you begin your research project. Before you begin you must know what you are going to do with the information you gather, and what actions you will take once you have the analysis and answers you seek.
Sometimes the act of performing research can cloud an organization’s thinking about what will happen after the research. Research performed for its own sake rarely does more than take up space as a forgotten report filed in some drawer. David Nerz suggests, “Don’t ask if you aren’t going to act.”
Most companies seek feedback intermittently – only on a project basis. But research can support deeper insight if it is conducted regularly. The baseline established by previous results can allow you to sense smaller changes in the direction of the market than you could otherwise determine. This kind of insight is only possible if you have the discipline to conduct your research in a way that allows meaningful comparison between research results obtained during different time periods.
Whether or not you envision your individual projects as part of a larger whole, the timing of individual research efforts will play a large role in determining the validity of the final results. Many a research project has been rushed to coincide with some external event – a trade show or a new product launch. If the goals are clear, research can be accelerated on occasion, but speeding up raises the risk of errors. In some cases, bad data is worse than no data.
Types of research
There are many different types of research. Qualitative and quantitative research are the two main kinds, but there are many subdivisions, including eye tracking research, emotional response research, buyer decision process research, mystery shopping, customer satisfaction research, online panels, audits and test marketing, just to name a few.
Your research goals and the timing of the research will strongly influence the type of research you choose.
Recruiting the right respondents
“Recruiting is the lifeblood of research,” says David Nerz. With decades of experience, David knows first-hand the importance of recruiting the correct respondents. Often, the ideal candidates are not easy to identify or recruit. In the life sciences, decision makers can be hidden inside corporate departments, with non-obvious titles.
Their inaccessibility makes proper recruitment of the right audience all the more important. After all, you wouldn’t ask the clerk at the local grocery store for their opinion of your banner ad just because they are easy to find. It doesn’t matter what they think – they are not the intended audience. If your primary audience is bench scientists, you shouldn’t be talking to lab administrators, or service technicians, or salespeople.
“Taking shortcuts in recruitment is very tempting, but can get you into trouble quickly,” David continues. Improper recruitment is an insidious problem, because the data you collect won’t remind you that you’ve gotten answers from the wrong people. A 70% favorable rating will still look like 70%, even though it is measuring responses from the wrong people. Garbage in, garbage out.
The questions you ask
The purpose of research is to get answers, but the way you ask your questions will determine the types and accuracy of the answers you get. As an example of how a question’s wording can predetermine the answer, we need only look to the political arena and “push polling.” Push polling is a technique used to seed messages among the electorate under the guise of conducting a supposedly neutral research poll. A question such as: “Are you aware that Candidate X supports an amendment to denigrate mothers and block the consumption of apple pie?” is not designed to elicit answers; it is designed to spread a political message. This is clearly a fictitious example, but it shows how the way you word a question can determine the answer you get.
Pay special attention to the wording of your questions. Asking: “How likely would you be to take action based on this email?” may sound very similar to “What action would you take, based on this email?” But the latter question presupposes that the respondent will be taking some form of action; the former question does not. The wording of questions will determine the validity of the information you gather.
Look for possible misinterpretations that could yield a skewed answer. Even the order in which you ask questions can have an impact on the answers. That is why the order of presentation for different alternatives is often randomized, to avoid positional bias.
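The randomization mentioned above can be sketched simply. This is a hypothetical illustration (the message labels and function name are invented, not from the source): each respondent receives an independent random ordering of the alternatives, so no single message always appears first.

```python
import random

# Hypothetical candidate messages being tested
ALTERNATIVES = ["Message A", "Message B", "Message C"]

def presentation_order(respondent_id):
    """Return a random ordering of the alternatives for one respondent,
    so positional bias is spread evenly across the sample."""
    rng = random.Random(respondent_id)  # per-respondent seed keeps orders reproducible
    order = ALTERNATIVES[:]             # copy; never mutate the master list
    rng.shuffle(order)
    return order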
How important, how believable and how compelling?
To be effective, marketing messages must meet at least three criteria – they must be important, believable and compelling. If you are conducting research to determine the potential effectiveness of a marketing initiative, these are often the most important questions you will ask:
- How important is this message/benefit to you?
- How believable is this message/benefit?
- How likely would you be to take action based on this message/benefit?
Participation of the management team
Research efforts can yield valuable information. To be truly useful to your organization, your management team must participate. “Decision makers must participate, or they won’t accept the results,” says David Nerz. Their participation should take several forms. First, they must help establish the research objectives. Second they have to be active in the process. For focus groups, this means that they should be “behind the mirror,” in order to get a visceral understanding of the respondent.
Studies indicate that the average business never hears from 96 percent of its unhappy customers. Customer feedback is vital, and the management team must participate to get the full effect.
Respect the answers you gather
If you have done your homework properly, the answers you gather will be valid – and valuable. Once the answers are obtained a common mistake is to judge the validity of the research based upon what the data says. David Nerz points out, “You must accept the ‘bad result.’ It is not gospel just because it aligns with what you want to hear.”
The reason you perform qualitative research is to get answers. Throwing out an answer because it doesn’t agree with a theory is the hallmark of bad research – both in science and in market research. “What’s most important is to get the truthful answer, not the answer that the management team wants to hear,” says David Nerz.
- There are many ways to get feedback on your marketing initiatives.
- Measurement of the “conversions” generated by your existing initiatives is one way to get feedback. This requires existing initiatives. To test planned initiatives, you must perform qualitative market research.
- The purpose of research is to get answers. But qualitative research cannot answer every question. Some questions, like determining market size, estimating sales figures or uncovering price points are not suitable for qualitative research.
- There are some common factors in successful market research.
- Clarify your goals before you begin. Understand what you will be doing with the answers you receive. As David Nerz says, “Don’t ask if you aren’t going to act.”
- Pay attention to the timing of your research. Research that is rushed can lead to flawed data.
- Recruiting is the lifeblood of research. Getting the right respondents is one of the most important steps in successful research.
- The way you ask a question will determine the answers you get. Pay special attention to the wording of your questions.
- The three most important questions you will ask are typically: How important is this message/benefit to you? How believable is this message/benefit? How likely would you be to take action based on this message/benefit?
- Participation by the management team is important. They must set the research objectives and they must be active in the process.
- If you set up your research properly, you will get good information. You must accept a bad result. Throwing out an answer because it doesn’t agree with a theory is the hallmark of bad research – both in science and in market research.