Many “best practices” in choice modeling begin from a reasonable observation: people rarely evaluate every option and attribute in a fully compensatory way. Instead, they simplify. They apply cutoffs, screen options, ignore attributes, and rely on heuristics that reduce cognitive effort.

This observation is not controversial.
Where best practices often go wrong is in what they do next.

A common justification follows: if respondents simplify, then the study must guide them toward more structured, deliberate decision making. Simplification is treated as a deficiency to be corrected, rather than a behavior to be understood. At that point, the study stops measuring the market and starts reshaping it.

This is not a technical flaw.
It is a role error.

Simplification Is Not a Defect

Simplifying choice strategies are not evidence of poor decision making. In many markets, they are rational, adaptive, and efficient.

Consumers simplify because:

  • attention is scarce,
  • many attributes are non-binding,
  • tradeoffs are not always explicit,
  • and decision costs often exceed the value of precision.

Screening rules, cutoffs, and attribute ignoring are not noise injected into the data. They are signals about how choice actually occurs under real constraints.

Treating simplification as a problem to fix assumes an idealized decision process that may not exist in the market being studied.
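
To make that concrete, here is a minimal sketch in Python of the two rule families at issue. The attribute names, weights, and cutoff are invented for illustration; the point is only that the same options can be ranked differently by a compensatory weighted sum and a conjunctive screen.

  def compensatory_score(option, weights):
      # every attribute can offset every other attribute
      return sum(weights[k] * option[k] for k in weights)

  def passes_screen(option, cutoffs):
      # one violated cutoff eliminates the option outright
      return all(option[k] >= lo for k, lo in cutoffs.items())

  options = [
      {"battery_hours": 12, "camera_score": 7},
      {"battery_hours": 6,  "camera_score": 10},
  ]
  weights = {"battery_hours": 0.3, "camera_score": 0.7}  # hypothetical
  cutoffs = {"battery_hours": 8}   # "must last a full workday"

  best = max(options, key=lambda o: compensatory_score(o, weights))
  survivors = [o for o in options if passes_screen(o, cutoffs)]
  # the rules disagree: the weighted sum prefers the second option
  # (score 8.8 vs 8.5), but the screen eliminates it, and no camera
  # score can compensate for the violated cutoff

That divergence is information about the market, and it is exactly what disappears when one family is assumed away.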

The Hidden Assumption Behind “Best Practice”

Many best-practice justifications quietly embed a normative assumption: that respondents should behave in a particular way—often a fully compensatory, tradeoff-based manner—and that deviations from this ideal indicate error, fatigue, or low data quality.

This argument is not about whether structured techniques work. It is about departing from economic theories of consumer choice, quietly substituting an imposed decision strategy, and then presenting that substitution as best practice. Choice models may represent how consumers decide; they are not entitled to redefine, by default, how consumers ought to decide. When a study enforces a particular choice logic, that enforcement is no longer a methodological detail; it is a substantive assumption about how the market works.

Once this assumption is accepted, a series of corrective moves follows naturally:

  • requiring respondents to explicitly screen attributes as “must have” or “unacceptable,”
  • allowing respondents to modify the choice task by eliminating large portions of the design space,
  • breaking decisions into staged steps that pre-define how tradeoffs are meant to occur,
  • treating deviations from that structured path as inconsistency rather than information.

Notably, the same logic that treats simplification as a defect also selectively permits it, but only in forms the study can formalize and control.

Each step appears reasonable in isolation. Taken together, they enforce a decision strategy chosen for the convenience of the method rather than the reality of the market.

The study is no longer asking, “How do people decide?”
It is implicitly asking, “How can we get people to decide in a way the model can represent?”
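
The reshaping is easy to see in code. Here is a hedged sketch of the first two corrective moves above, with hypothetical attribute levels and screens: profiles a respondent rules out up front are simply never measured.

  from itertools import product

  # a small hypothetical design: 4 x 3 x 3 = 36 profiles
  levels = {
      "price":   [199, 299, 399, 499],
      "storage": ["64GB", "128GB", "256GB"],
      "color":   ["black", "white", "red"],
  }
  full_design = [dict(zip(levels, combo)) for combo in product(*levels.values())]

  # screens a respondent declares up front ("unacceptable" levels)
  unacceptable = {"price": [499], "color": ["red"]}

  def admissible(profile):
      return all(profile[k] not in bad for k, bad in unacceptable.items())

  shown = [p for p in full_design if admissible(p)]
  print(f"{len(full_design)} profiles in the design, {len(shown)} ever shown")
  # 36 -> 18: half the design space is never measured, so the model
  # can say nothing about whether those screens would bind at purchase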

Strategy Enforcement vs. Strategy Recovery

This distinction is critical.

The purpose of a choice study is not to teach respondents how to decide.
It is to recover how decisions are actually made.

When a study enforces a preferred strategy, however sophisticated or well-intentioned, it ceases to be purely inferential. The preferences it recovers are conditional on a decision process that may not exist outside the survey environment.

This creates a dangerous illusion:

  • model fit improves,
  • results look cleaner,
  • outputs appear more interpretable.

But the clarity comes from constraint, not insight.
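
A small simulation makes the illusion visible. Purely for illustration, assume respondents comply with a forced tradeoff inside the survey but apply a hard price screen in the market; the weights, cutoff, and attribute ranges below are invented, and the compensatory model is a plain logistic regression on attribute differences.

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)
  n = 5000
  # paired profiles, columns [price, quality]; ranges are invented
  a = rng.uniform([1, 0], [10, 5], size=(n, 2))
  b = rng.uniform([1, 0], [10, 5], size=(n, 2))

  def forced_tradeoff(x):
      # inside the survey: the task makes every choice an explicit
      # weighted tradeoff (hypothetical weights)
      return -0.4 * x[:, 0] + 1.0 * x[:, 1]

  survey_choice = (forced_tradeoff(a) > forced_tradeoff(b)).astype(int)

  def market_rule(x, y, cutoff=6.0):
      # outside the survey: a hard price screen comes first
      ok_x, ok_y = x[0] <= cutoff, y[0] <= cutoff
      if ok_x != ok_y:
          return int(ok_x)
      return int(-0.4 * x[0] + 1.0 * x[1] > -0.4 * y[0] + 1.0 * y[1])

  market_choice = np.array([market_rule(a[i], b[i]) for i in range(n)])

  # compensatory logit on attribute differences, fit to survey data
  X = a - b
  model = LogisticRegression(max_iter=1000).fit(X, survey_choice)
  print(f"accuracy on survey choices: {model.score(X, survey_choice):.2f}")
  print(f"accuracy on market choices: {model.score(X, market_choice):.2f}")
  # the first number is the clean fit; the gap to the second is the
  # part of the market the enforced process never let us see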

The Governance Implication

Confusing strategy with error is not a data quality issue. It is a governance failure.

When best practices elevate a particular decision theory to default status, they narrow the space of admissible explanations before analysis even begins. Simplification becomes something to eliminate rather than something to explain.

A choice model should be judged not by how cleanly it enforces a preferred strategy, but by how faithfully it recovers the strategies that actually exist.
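
One hedged sketch of what recovery could look like in practice: let candidate decision rules compete on held-out choices instead of building one of them into the task. The data, rule families, and cutoff grid below are all invented for illustration; in a real study the candidates would come from theory about the category.

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(1)
  n = 4000
  # paired profiles, columns [price, quality]; ranges are invented
  a = rng.uniform([1, 0], [10, 5], size=(n, 2))
  b = rng.uniform([1, 0], [10, 5], size=(n, 2))

  def screen_then_quality(x, y, cutoff):
      # conjunctive family: hard price screen, then quality among survivors
      ok_x, ok_y = x[0] <= cutoff, y[0] <= cutoff
      if ok_x != ok_y:
          return int(ok_x)
      return int(x[1] > y[1])

  # the strategy (unknown to the analyst) generating observed choices
  choice = np.array([screen_then_quality(a[i], b[i], 6.0) for i in range(n)])
  tr, te = np.arange(0, 3000), np.arange(3000, n)

  # candidate 1: compensatory logit on attribute differences
  X = a - b
  logit = LogisticRegression(max_iter=1000).fit(X[tr], choice[tr])

  # candidate 2: the screening family, cutoff estimated from the data
  def screen_accuracy(cutoff, idx):
      pred = np.array([screen_then_quality(a[i], b[i], cutoff) for i in idx])
      return (pred == choice[idx]).mean()

  best_cutoff = max(np.linspace(1, 10, 91), key=lambda c: screen_accuracy(c, tr))

  print(f"compensatory logit, held-out: {logit.score(X[te], choice[te]):.3f}")
  print(f"screen at {best_cutoff:.1f}, held-out: {screen_accuracy(best_cutoff, te):.3f}")

When the screening family wins decisively, that is a finding about the market, not a data quality problem.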

Anything else is not measurement.
It is correction.