Getting started with Dynata+ Sample 

Dynata+ Sample gives researchers fast, flexible access to high-quality, research-grade respondents so you can field studies confidently, with transparency into feasibility, incidence, quality controls, and delivery. This guide helps you onboard quickly, set up your first project, and understand what to expect throughout fieldwork. 

Dynata+ Sample is a self-serve workflow to: 

  • Define your target audience and quotas. 
  • Estimate feasibility and expected completes. 
  • Launch and monitor fieldwork in real time. 
  • Receive high-quality completes supported by Dynata’s respondent quality standards. 
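At its core, a feasibility estimate is funnel arithmetic: each stage (response, qualification, completion) thins the pool of potential respondents. A minimal sketch with illustrative rates — these numbers and this function are hypothetical, not Dynata's actual feasibility model:

```python
def expected_completes(invites, response_rate, incidence, completion_rate):
    """Back-of-envelope feasibility: each funnel stage thins the pool."""
    return invites * response_rate * incidence * completion_rate

# Illustrative: 10,000 invites, 20% respond, 15% qualify, 80% finish
estimate = expected_completes(10_000, 0.20, 0.15, 0.80)
print(round(estimate))  # roughly 240 expected completes
```

If any one factor is uncertain, the projection is too — which is why platforms report feasibility as an estimate rather than a guarantee.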

Your first project: Step-by-step 

Step 1: Define your objective and success criteria 

Before building, clarify:

  • The decision this data will inform.
  • Required sample size and key subgroups.
  • Target LOI (length of interview) and device requirements.
  • Must-have quality thresholds (e.g., minimum completion time, red-herring checks, open-end review).

Step 2: Build your audience and quotas 

  • Select geography and demographics. 
  • Add screening criteria only when necessary. 
  • Set quota structure for your must-have cuts (age/gender/region, etc.). 

Best practice: Keep screeners concise. Long, complex screeners can reduce incidence and increase respondent fatigue. 
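The incidence cost of a long screener is multiplicative: if screening criteria are roughly independent, their pass rates multiply, so each added screen compounds the hit. A hypothetical illustration (assuming independence, which real criteria often violate):

```python
from math import prod

def effective_incidence(pass_rates):
    """Assuming roughly independent criteria, pass rates multiply."""
    return prod(pass_rates)

# Three screens, each passing 50% of entrants, leave 12.5% qualified
print(effective_incidence([0.5, 0.5, 0.5]))  # 0.125
```

Halving effective incidence roughly doubles the number of entrants needed per complete, so every screening criterion should earn its place.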

Step 3: Configure survey requirements 

Include: 

  • LOI and expected complexity. 
  • Device compatibility (mobile/desktop). 
  • Language requirements and localization notes. 
  • Any special routing constraints. 

Quality tip: Avoid “trap” questions that confuse genuine respondents. Use clear attention checks and logical validation instead. 

Step 4: Launch and monitor fieldwork 

During field: 

  • Watch pacing vs. quotas. 
  • Monitor incidence and drop-off. 
  • Check completes for speeders, straight-liners, duplicates, and low-effort open-ends. 
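These in-field checks can be expressed as simple rules over each complete. The sketch below is illustrative only — the function name, thresholds (one third of median duration, five-item grids, three-character open-ends), and inputs are assumptions, not Dynata's quality criteria:

```python
def flag_complete(duration_sec, grid_answers, open_end, median_duration):
    """Flag a single complete for review; thresholds are illustrative."""
    flags = []
    # Speeder: finished in under a third of the median duration
    if duration_sec < median_duration / 3:
        flags.append("speeder")
    # Straight-liner: identical answers across a long grid
    if len(grid_answers) >= 5 and len(set(grid_answers)) == 1:
        flags.append("straight-liner")
    # Low-effort open-end: effectively empty verbatim
    if len(open_end.strip()) < 3:
        flags.append("low-effort open-end")
    return flags

# flag_complete(90, [4] * 6, "ok", median_duration=600)
# -> ["speeder", "straight-liner", "low-effort open-end"]
```

In practice, flags like these mark completes for human review rather than automatic rejection, since any single rule produces false positives.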

Operational tip: If a quota is filling slowly, consider one or more of the following: 

  • Loosen criteria slightly. 
  • Add feasible alternatives (neighboring regions, broader income bands). 
  • Rebalance quotas. 
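One way to decide whether a quota needs these adjustments is a linear pacing projection: extrapolate the current completion rate to the end of field and compare it to the target. This is a rule of thumb, not a Dynata feature, and it ignores day-of-week and time-of-day effects:

```python
def on_track(completes, target, hours_elapsed, hours_total):
    """Linear pacing projection: will the current rate hit the target?"""
    projected = completes / hours_elapsed * hours_total
    return projected >= target

# 30 completes after 24h of a 72h field projects to 90 of 100 -> behind pace
print(on_track(30, 100, 24, 72))  # False
```

Checking pacing early leaves time for gentle adjustments; discovering a shortfall on the last day usually forces larger, riskier changes.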

Step 5: Close and review deliverables 

After fieldwork closes: 

  • Review quality summary (if provided in your workflow). 
  • Validate key cuts for balance and consistency. 
  • Document learnings to accelerate the next study (incidence, best-performing sources, LOI impact). 

Quality and methodology guidance

Questionnaire design 

  • Keep LOI aligned to your audience and topic. 
  • Use simple language, one idea per question. 
  • Randomize where appropriate (brands, attributes). 
  • Avoid forced answers for sensitive items; allow “prefer not to say” when relevant. 

Common issues and how to resolve them 

Low incidence / slow field 

Symptoms: few completes, high screen-out. Actions: 

  • Reduce screener length. 
  • Broaden targeting or adjust quotas. 
  • Recheck eligibility wording (ambiguous screeners can unintentionally exclude). 
  • Consider splitting rare audiences into separate cells. 

High drop-off 

Symptoms: starts are strong, completes lag. Actions: 

  • Reduce LOI. 
  • Simplify grids and remove repetitive sections. 
  • Ensure mobile experience is smooth. 
  • Place sensitive questions later. 

Unexpected data patterns 

Symptoms: unusual spikes, inconsistent brand awareness, too-clean distributions. Actions: 

  • Add/strengthen consistency checks. 
  • Review routing and answer options. 
  • Ensure randomization is functioning. 
  • Consider adding open-ends to validate.

About the Author

Alain C. Briançon, PhD, is Vice President of Research and Data Science at Dynata, leading AI and data science across market research methodology, advertising and brand solutions, and feasibility modeling. His current work includes applying generative AI, graph methods, and synthetic data systems to improve research design, data quality, respondent experience, and the speed and reliability of insights at scale. Previously, he led data science and AI initiatives at Profiles by Kantar, where he developed global AI-driven pricing and routing capabilities supporting a large commercial footprint, and at several technology organizations building real-time machine learning platforms and decision systems. He is a named inventor on 90 issued patents, including 29 in AI and machine learning, reflecting sustained leadership in applied innovation and defensible IP strategy. He holds a PhD from MIT.