A common best-practice claim in conjoint analysis is that increasing the number of attributes leads to cognitive overload, satisficing, and degraded data quality. This claim is typically presented as a psychological limit on what respondents can process.

However, experimental studies that explicitly varied attribute count—sometimes far beyond typical commercial practice—found that aggregate attribute effects remained stable within the tested designs. Importantly, these studies evaluate observable choice behavior and estimated effects, not the psychological constructs often used to justify attribute limits.

What the Study Explicitly Tested—and What It Did Not

What was studied

The paper experimentally varied the number of attributes included in conjoint profiles, including conditions with substantially higher attribute counts than those typically recommended in practitioner guidance.

Key features of the design include:

  • Systematic increases in attribute dimensionality while holding task structure constant.
  • Explicit separation of masking effects from satisficing behavior.
  • Evaluation of aggregate attribute effects rather than individual-level utilities.
  • Use of standard online respondent samples.

Psychological constructs such as fatigue, cognitive burden, or overload were not measured directly. Instead, the study focused on observable choice behavior and the stability of estimated attribute effects. As a result, claims about psychological strain remain explanatory assumptions rather than tested mechanisms within this design.
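
To make "aggregate attribute effects" concrete, here is a minimal sketch of the kind of quantity involved: an AMCE-style linear probability estimate on simulated choice data. The data, attribute names, and effect sizes are hypothetical, and the paper's actual design (paired forced-choice tasks, clustered standard errors) is simplified away; the point is only what an aggregate effect is, not how the authors estimated theirs.

```python
# Minimal sketch of an aggregate attribute effect on simulated data.
# Attribute names and effect sizes are hypothetical, not from the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_profiles = 4000

# Each row is one profile shown to a respondent; attribute levels are randomized independently.
df = pd.DataFrame({
    "price_low": rng.integers(0, 2, n_profiles),    # 1 = low price level
    "brand_known": rng.integers(0, 2, n_profiles),  # 1 = familiar brand
})

# True data-generating process: choice probability shifts with each attribute level.
p = 0.5 + 0.15 * (df["price_low"] - 0.5) + 0.08 * (df["brand_known"] - 0.5)
df["chosen"] = rng.binomial(1, p)

# AMCE-style estimate: a linear probability model of choice on attribute levels.
# The coefficient on price_low is the average effect of switching that level,
# marginalizing over the other randomized attributes.
fit = smf.ols("chosen ~ price_low + brand_known", data=df).fit(cov_type="HC1")
print(fit.params)
```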


What they found

Observable behavioral changes did occur as attribute counts increased:

  • Response times shortened modestly.
  • Attention became more concentrated on a subset of attributes.
  • Some attributes were ignored more frequently.

These behavioral changes are consistent with satisficing, but also with alternative explanations such as learning, strategy stabilization, or selective attention under repeated tasks.

At the same time:

  • Aggregate attribute effects remained stable across conditions (a stability-check sketch on simulated data follows this list).
  • No sharp breakpoint, collapse, or regime shift in estimates was observed as attribute counts increased.
  • Relative ordering of effects was preserved even in designs with attribute counts well above common practice.
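
As a purely mechanical illustration of what "stable across conditions" means, the sketch below simulates a low- and a high-attribute condition that share one core attribute with the same true effect, estimates that effect in each, and prints the results side by side. All numbers and variable names are invented; real stability checks, like the paper's, compare conditions on actual respondent behavior rather than on a simulation in which stability holds by construction.

```python
# Illustrative stability check on simulated data (not the paper's data or estimator):
# estimate the same core attribute effect under a low- and a high-attribute condition.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

def simulate_condition(n_extra_attrs: int, n_profiles: int = 4000) -> pd.DataFrame:
    """Simulate profiles with one core attribute plus n_extra_attrs filler attributes."""
    df = pd.DataFrame({"core_level": rng.integers(0, 2, n_profiles)})
    for j in range(n_extra_attrs):
        df[f"extra_{j}"] = rng.integers(0, 2, n_profiles)  # independently randomized fillers
    p = 0.5 + 0.12 * (df["core_level"] - 0.5)              # same true core effect in both conditions
    df["chosen"] = rng.binomial(1, p)
    return df

for n_extra, label in [(4, "low-attribute condition"), (20, "high-attribute condition")]:
    df = simulate_condition(n_extra)
    fit = smf.ols("chosen ~ core_level", data=df).fit(cov_type="HC1")
    est = fit.params["core_level"]
    lower, upper = fit.conf_int().loc["core_level"]
    print(f"{label}: effect = {est:.3f}  [{lower:.3f}, {upper:.3f}]")
```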

What the paper does not show

This study does not establish prescriptive design limits or validate psychological explanations for observed behavior. Specifically, it does not establish:

  • an “optimal” number of attributes,
  • that satisficing never occurs,
  • that individual-level utilities remain reliable at high dimensionality,
  • or that complex designs are sufficiently powered by default.

Its conclusion is narrower: Within the tested designs, increasing attribute count did not cause collapse of aggregate conjoint estimates.

Where Common Attribute Limits Go Beyond the Evidence

In contrast, much practitioner guidance presents untested assumptions as if they were established rules, stating or implying that:

  • attribute counts should be strictly limited to avoid cognitive overload,
  • higher attribute dimensionality leads to dominant satisficing behavior,
  • and simpler profiles are generally safer regardless of inference goals.

Notably, this guidance often:

  • cites concerns about fatigue or burden without direct measurement,
  • treats behavioral indicators as evidence of data degradation,
  • and presents conservative attribute limits as general rules rather than context-dependent design choices.

This contrast suggests that the disagreement is not really about whether satisficing exists, but about what constraint should govern conjoint design decisions. Much of the confusion arises when respondent burden is treated as the primary limiting factor, rather than the strength of evidence required for the decision at hand.

Why Unmeasured Psychological Claims Become Design Constraints

The discrepancy appears to stem from three factors:

Different levels of measurement
Psychological constructs such as burden or fatigue are frequently invoked to justify attribute limits, but are rarely measured, validated, or falsified within conjoint experiments. Behavior consistent with satisficing is measured, but it remains ambiguous: the same patterns are compatible with alternative explanations such as learning or strategy stabilization.

Heuristics standing in for evidentiary analysis
Attribute limits are often justified through precautionary narratives rather than explicit evaluation of whether increased dimensionality materially affects the estimand being recovered.

Modeling practices that mask insufficiency
Hierarchical Bayesian models tend to produce stable-looking estimates even when information is weak, which can make underpowered designs appear well-behaved and reduce pressure to articulate evidentiary requirements explicitly.
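
A toy partial-pooling calculation makes this masking effect visible. The sketch below uses a simple normal-normal model, not the full hierarchical Bayesian choice models used in practice, and assumes the population parameters are known. It shows that when each respondent contributes only a little information, individual estimates shrink toward the group mean and carry bounded posterior uncertainty, so the output can look well-behaved regardless of how weakly the data speaks.

```python
# Toy normal-normal partial pooling (not a full HB choice model): with weak
# individual-level information, posterior means shrink toward the group mean,
# so individual estimates can look stable even when the data barely informs them.
import numpy as np

rng = np.random.default_rng(2)

mu, tau = 0.0, 1.0      # population mean and spread of individual utilities (assumed known here)
sigma = 3.0             # noise SD of a single observation
true_theta = rng.normal(mu, tau, size=5)

for n_obs in (2, 50):
    # Each respondent contributes n_obs noisy observations of their own utility.
    y_bar = true_theta + rng.normal(0, sigma / np.sqrt(n_obs), size=true_theta.size)
    data_precision = n_obs / sigma**2
    prior_precision = 1 / tau**2
    post_mean = (data_precision * y_bar + prior_precision * mu) / (data_precision + prior_precision)
    post_sd = np.sqrt(1 / (data_precision + prior_precision))
    print(f"n_obs={n_obs:3d}  posterior means={np.round(post_mean, 2)}  posterior sd={post_sd:.2f}")
```

With only two observations per respondent, the posterior means cluster near the group mean with a tight-looking posterior SD; only with many observations do the estimates actually track individual-level differences.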

The correct takeaway

The appropriate conclusion is not that “more attributes are better,” nor that attribute limits are unnecessary.

It is simply this: Empirical evidence does not support treating attribute count as a psychological failure threshold that automatically degrades aggregate conjoint estimates, within the ranges tested.

Complexity is therefore not “free”: higher-dimensional designs require commensurately stronger information support to justify the intended inference.

Design decisions still depend on:

  • the number of parameters,
  • the target estimand,
  • and the level of inference required.

Discussions of attribute count are more productive when framed around evidentiary sufficiency and estimands, rather than assuming fatigue-driven failure as a default.
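
One way to operationalize "evidentiary sufficiency" is a back-of-the-envelope count of parameters against choice observations. The sketch below uses entirely hypothetical design numbers; the useful output is the ratio, which tightens as attributes, levels, interactions, or individual-level inference goals are added.

```python
# Back-of-the-envelope check (hypothetical design numbers): how many main-effect
# parameters does an attribute list imply, and how many choice tasks back them?
levels_per_attribute = [3, 3, 4, 2, 5, 3, 4, 2]   # hypothetical attribute list
n_respondents = 800
n_tasks_per_respondent = 12

# Dummy coding: each attribute contributes (levels - 1) main-effect parameters.
n_parameters = sum(k - 1 for k in levels_per_attribute)
n_choice_tasks = n_respondents * n_tasks_per_respondent

print(f"main-effect parameters: {n_parameters}")
print(f"choice tasks:           {n_choice_tasks}")
print(f"tasks per parameter:    {n_choice_tasks / n_parameters:.0f}")
# Adding attributes, or requiring individual-level inference, raises the parameter
# count and tightens this ratio; that, rather than a fixed attribute cap, is the
# constraint the text argues should drive design decisions.
```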

This tension highlights a broader design principle: conjoint quality depends on aligning design complexity with evidentiary requirements for the decision at hand, rather than enforcing fixed limits justified by unmeasured psychological narratives.

Bansak, Kirk, Jens Hainmueller, Daniel J. Hopkins, and Teppei Yamamoto. "Beyond the Breaking Point? Survey Satisficing in Conjoint Experiments" (October 30, 2018). Stanford University Graduate School of Business Research Paper No. 17-33; MIT Political Science Department Research Paper No. 2017-16.