8 Key Characteristics of a Quality Research Partner: 2. Sound Methodologies


By: Melanie Courtright,
EVP Global Research Science & Data Strategy


About this series: Digital technologies have dramatically lowered the cost of research, ushering in important new opportunities to gain and apply research insights. Unfortunately, these cost structures can also lead to compromises in quality – which in turn can compromise the accuracy and reliability of research insights, and ultimately the business decisions they support. This series presents eight of the most important characteristics to look for in a quality research partner.

My last post covered sourcing breadth and depth, discussing the keys to quality in sourcing a sample. In this post, we’ll consider the sound survey methodologies that are crucial to the quality – and therefore the reliability – of the data. They include best practices in:

  • Targeting individuals for participation
  • Assigning targeted individuals to the survey
  • Screening prospective participants
  • Maintaining quality control in the survey
  • Ensuring representative mobile participation

Targeting

There are five different data resources a provider can use to target individuals for participation in any given survey:

1. Known information from the provider’s recruitment process – for example, a person recruited through an airline rewards program;

2. Previously asked and stored information, such as demographic characteristics and preferences, gathered via questionnaire by quality-conscious providers;

3. Gathered data from real-time screening questions relevant to the particular study;

4. Observed behaviors such as online searches, website visits and other actions; types of devices used; and geotracking data;

5. Appended data such as voter registrations, data from Mosaic or Prizm, and other sources.

Different targeting tools have different influences on the selection of participants, which in turn has implications for your research results. Over-reliance on real-time questions, for example, can result in people with no hope of qualifying entering the screening process, only to be disqualified from survey after survey – which reduces their interest and engagement. Research projects involving narrowly defined or hard-to-reach target groups may require several of the five data resources listed above.

Your research partner should be transparent about the kinds of targeting data they use and their ability to leverage multiple sources. They should also be able to explain how their options and resources can influence your results. If the wrong people enter the survey, they will likely provide inadequate or uninformed answers that will create bias.

You should also ask about a provider’s capabilities with appended data, because it opens up intriguing new possibilities for research studies. Marrying panel data with different categories of third-party data – automotive, financial, technology, transactional, and others – creates new, detailed data sets that allow for more highly targeted survey delivery, more detailed and revealing questions, and even triggered surveys.

Of course, some kinds of research should be conducted without targeting, such as market sizing studies. For these kinds of studies, your provider should be able to tell you how they go about selecting participants without targeting.


Assignment and the Survey Router

The next step in the process is to move qualified sample into the survey, which is accomplished by a survey router. Routers move targeted sample into specific surveys efficiently, but aspects of their design – how they handle randomization, weighted randomization and other factors – can influence survey assignments, and therefore results. Your provider should be able to explain how their router works, and the potential effects it will have on your data.
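To make the influence of weighted randomization concrete, here is a minimal sketch of how a router might assign a qualified participant to one of several eligible surveys. The field names (`country`, `remaining_quota`) and the quota-proportional weighting are illustrative assumptions, not a description of any particular provider’s router:

```python
import random

def assign_survey(participant, surveys):
    """Weighted random assignment: route a participant to one of the
    surveys they are eligible for, with probability proportional to
    each survey's weight (here, how much of its quota remains open).
    Returns None if the participant qualifies for nothing."""
    eligible = [s for s in surveys if participant["country"] in s["countries"]]
    if not eligible:
        return None
    weights = [s["remaining_quota"] for s in eligible]
    # random.choices performs the weighted draw; a survey with twice
    # the remaining quota is twice as likely to receive this person.
    return random.choices(eligible, weights=weights, k=1)[0]
```

Even this toy version shows why router design matters: change the weighting rule and the mix of people entering each survey changes with it.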

Well-designed routers can make other important contributions to quality:

  • Security and data tagging, such as confirming devices and IP addresses;
  • Improving the participant experience by avoiding over-solicitation and under-qualification of participants, which can result in too many people being rejected and losing interest;
  • Reducing self-selection bias by preventing prospective participants from seeing the survey title and incentives until they are in the study;
  • Enforcing business and participation rules, such as preventing those who have just completed a survey from joining a second.

Your research partner should be able to tell you whether their router performs these functions.


Screening and the Screener Questionnaire

The principal role of the screener is ensuring that the right people take the survey. In my experience, puzzling data can usually be traced to issues in screening. That’s why it’s important to understand how a screener questionnaire is written and structured to qualify participants, because that ultimately controls who will be starting the survey.

The screener must ensure that the people who take the survey actually have the knowledge, experience and interest to answer the questions. In B2B surveys, that may entail confirming that a job title carries the expected responsibilities, or that a person with knowledge of budgets is also able to share that information. For consumer surveys, it may be a matter of separating people who simply use a product or service from those who actually care about it.

I could easily devote an entire post to screeners, but here are some quick fundamentals for good screener questions:

  • Make sure they can be reasonably answered by the participant;
  • Ask just one question at a time;
  • Use clear categories;
  • Avoid scale overlap;
  • Don’t use leading words or affirmations, and avoid yes/no questions;
  • Be sure your questions are clear and precise;
  • Avoid assumptions.

In general population studies, the screening process – specifically the number of potential participants who screen out, and why — can help researchers understand the potential size of the market, so make sure your partner can provide that information. Finally, the screening process provides one last opportunity to verify the accuracy of data elements critical to the targeting phase.
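The market-sizing signal described above boils down to a simple incidence calculation – the share of targeted people who qualify. A hedged sketch (not any provider’s specific formula):

```python
def incidence_rate(qualified, screened_out):
    """Share of screened participants who qualify for the survey --
    in a general-population study, a rough proxy for the incidence
    of the target behavior or trait in the market."""
    attempted = qualified + screened_out
    return qualified / attempted if attempted else 0.0
```

For example, if 150 of 1,000 screened participants qualify, the observed incidence is 15% – a figure your partner should be able to report alongside the reasons people screened out.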


Data Cleaning: In-Survey Quality Control

Every survey should include some quality control questions to identify respondents who may not be qualified or adequately engaged to provide careful, reliable responses. Quality control efforts can include:

  • Open-ended questions, to identify respondents who are disengaged;
  • Questions about unlikely or low-incidence items or events, to identify respondents who overstate or exaggerate;
  • Questions (including multiple questions) that allow respondents to agree with contradictory statements, such as “I am a loyal brand buyer,” and “I am a value price shopper”;
  • Tracking the time spent completing the survey, to identify respondents who speed through (bearing in mind that younger participants are adept with their devices and naturally move faster);
  • Consistency checks through similar questions near the beginning and end of the survey.

Best practice is to include at least three of these kinds of quality control questions in every survey, and remove participants who fail two or more of them. The result will be cleaner, more accurate and more reliable data.
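The “fail two or more” rule above can be sketched as a simple scoring pass over the responses. The three checks mirror the practices listed – speeding, claiming an unlikely trap item, and agreeing with contradictory statements – but the field names and the 40%-of-median speeding threshold are illustrative assumptions:

```python
def qc_failures(resp, median_seconds):
    """Count how many quality-control checks one respondent fails."""
    fails = 0
    if resp["seconds"] < 0.4 * median_seconds:           # speeder
        fails += 1
    if resp["claimed_fake_brand"]:                       # low-incidence trap item
        fails += 1
    if resp["agrees_loyal"] and resp["agrees_value_only"]:  # contradictory agreement
        fails += 1
    return fails

def clean(responses, median_seconds):
    """Keep only respondents who fail fewer than two checks."""
    return [r for r in responses if qc_failures(r, median_seconds) < 2]
```

Keeping the threshold at two failures, rather than one, avoids discarding a genuinely fast but attentive respondent on the basis of a single check.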


Final Consideration: The Mobile Imperative

Mobile device usage has grown with such remarkable speed that a mobile-first research strategy has truly become a requirement. Our recent Dynata Global Trends Report reveals that smartphone ownership is truly ubiquitous among Millennials (98%) and Gen X’ers (95%) across the globe. Adoption is almost as high among Baby Boomers (82%) and the Silent Generation (71%).

Research has been profoundly affected by these high rates of mobile device ownership. In the first quarter of 2019:

  • 34% of all surveys globally were taken on smartphones, and another 8% on tablets;
  • For younger cohorts, 56% of surveys were taken on smartphones, 3% on tablets;
  • Even for Baby Boomers between 55 and 64, 23% were taken on smartphones, 12% on tablets.

With these significant numbers, it’s clear that surveys must adapt for convenient completion on a mobile device as well as a desktop computer. Otherwise, respondents will disengage or drop out altogether, with disproportionately higher abandonment among younger generations, leaving the data unfit for business decision making.

I hope this post provides useful insights into some of the ways that survey methodology can impact the accuracy and reliability of research results. In my next post, I’ll discuss the critical importance of transparency in a research partnership.