Testing Best Practice Under Pricing Pressure
Standard pricing best practices can quietly fail when price and pack size interact, unless that risk is tested directly.

Situation:
A food & beverage brand faced a forced price increase due to rising ingredient costs. The initial analytic plan followed standard pricing best practices, a common and reasonable starting point in these situations.
Complication:
In portfolios where price and pack size interact, this approach carries a known risk: it frequently produces inverted or unstable demand curves that undermine confidence in the result.
Rather than rejecting the default method, the analysis was structured to test it directly alongside an approach engineered for the specific constraints of the portfolio. This made the risk embedded in the best practice visible in the data rather than leaving it implicit in debate.
Outcome:
The comparison surfaced an economic violation in the standard approach: the resulting price ordering placed a single pack above a multi-pack, which disqualified the output from decision use. Outputs of this kind cannot safely enter an executive decision process.
The alternative produced a stable, interpretable price signal that could be used without qualification.
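For readers who want a concrete sense of the kind of economic-consistency check involved, the sketch below flags pack-size ordering violations automatically. The model, numbers, and variable names are illustrative assumptions, not outputs from the engagement described here.

```python
# Hypothetical sketch: an economic-consistency check on willingness-to-pay
# estimates by pack size. The values and names are illustrative assumptions.

# Estimated willingness to pay (per pack) from some upstream pricing model.
wtp_by_pack = {
    1: 2.10,   # single pack
    4: 1.95,   # 4-pack valued below the single pack: an economic violation
    8: 6.80,
}

def check_pack_ordering(wtp):
    """Return (smaller, larger) pack pairs where the larger pack's total
    willingness to pay falls below the smaller pack's."""
    violations = []
    packs = sorted(wtp)
    for smaller, larger in zip(packs, packs[1:]):
        if wtp[larger] < wtp[smaller]:
            violations.append((smaller, larger))
    return violations

for small, large in check_pack_ordering(wtp_by_pack):
    print(f"Violation: {large}-pack valued below {small}-pack "
          f"({wtp_by_pack[large]:.2f} < {wtp_by_pack[small]:.2f})")
```

A check like this turns an implicit economic assumption into an explicit pass/fail condition before results reach a decision audience.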
Leveling Up Monetization Strategy
Monetization research fails silently when model structure obscures the difference between signal and artifact.

Situation:
A study aimed to understand how players value core game modes, such as Capture the Flag, when delivered through different monetization formats including base game, downloadable content, and subscription. The goal was to inform monetization strategy by identifying which combinations actually drive value.
Complication:
Standard conjoint analysis failed in this configuration. Because the same game modes appeared across multiple formats, the model fragmented each preference across separate parameters, introducing multicollinearity and unstable estimates.
What appeared as noise in the output reflected a deeper structural problem that downstream readers could not reliably distinguish from signal. The model could not represent how players actually evaluate content across formats, creating a risk that monetization guidance would be driven by artifacts of the model rather than by player preferences, with no obvious signs of failure.
Nothing in the standard workflow required this default structure to be questioned or revalidated once it produced plausible-looking output.
Outcome:
The analysis was restructured to link game modes across formats while estimating the value of monetization format separately. This stabilized the output and produced preference estimates that aligned with how players experience content and how strategy teams evaluate monetization trade-offs.
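As a rough illustration of the structural difference, the sketch below contrasts a fragmented specification (one parameter per mode-and-format combination) with a linked specification (separate mode and format effects). The attribute names and the simple additive structure are assumptions for illustration, not the actual study design.

```python
# Hypothetical sketch of the two conjoint design structures, using made-up
# attribute names. In the fragmented specification each (mode, format)
# combination is its own parameter, so preference for a mode is split across
# several correlated columns. In the linked specification each mode and each
# monetization format gets a single shared effect.
import itertools
import pandas as pd

modes = ["capture_the_flag", "deathmatch", "co_op"]
formats = ["base_game", "dlc", "subscription"]
profiles = pd.DataFrame(list(itertools.product(modes, formats)),
                        columns=["mode", "format"])

# Fragmented: one dummy per (mode, format) combination -> 9 parameters,
# with the same mode scattered across three of them.
fragmented = pd.get_dummies(profiles["mode"] + ":" + profiles["format"])

# Linked: one effect per mode plus one per format -> 6 parameters,
# with each mode estimated once across all formats.
linked = pd.get_dummies(profiles[["mode", "format"]])

print(fragmented.shape[1], "parameters in the fragmented specification")
print(linked.shape[1], "parameters in the linked specification")
```

The parameter count is not the point in itself; the point is that the linked structure estimates each mode once, which is what allows the estimates to align with how players evaluate content across formats.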
Plausible output is not evidence of a valid decision model when no one is structurally responsible for validating it.
Unlocking Portfolio Pricing
When no one owns the decision definition, analysis defaults to outputs that are easy to produce rather than useful to act on.

Situation:
A software company set out to optimize pricing across a multi-product portfolio. The research effort relied on standard conjoint software and accepted best practices, with the expectation that modern tooling and simulation outputs would naturally support portfolio pricing decisions.
Complication:
As execution progressed, the work drifted toward abstract importance outputs that were easy to produce but did not translate into portfolio pricing action. When the team asked the software provider for help, the response reinforced that direction: focus on abstract importance, presented as the output the team “needed,” even though it did not answer the portfolio pricing question.
At that point the team brought in governance support, not to add sophistication, but to regain decision leverage. That is when the deeper issue surfaced. No one had been responsible for defining what decision-grade evidence meant for this decision, so there was no standard for what the analysis had to produce, what would count as defensible, or how results would be used to choose pricing moves.
Outcome:
The project was retooled around a clear decision contract. The model and simulator were designed to match the portfolio decision space, enabling revenue-based scenario testing, product mix simulation, and explicit trade-offs that could be explained and defended.
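The sketch below gives a minimal sense of what revenue-based scenario testing can look like, assuming a small made-up portfolio, illustrative utilities, and a simple logit share rule; none of these values or names come from the actual engagement or its tooling.

```python
# Hypothetical sketch of revenue-based scenario testing over a small
# portfolio. Products, utilities, and the price sensitivity are illustrative
# assumptions; shares come from a simple multinomial-logit rule.
import math

portfolio = {                 # product: (assumed base utility, current price)
    "starter":    (1.0, 19.0),
    "pro":        (2.2, 49.0),
    "enterprise": (3.0, 99.0),
}
PRICE_SENSITIVITY = 0.02      # assumed utility penalty per currency unit

def simulate(prices):
    """Return (share, expected revenue per chooser) for each product
    under a given price scenario."""
    utils = {p: base - PRICE_SENSITIVITY * prices[p]
             for p, (base, _) in portfolio.items()}
    denom = sum(math.exp(u) for u in utils.values())
    shares = {p: math.exp(u) / denom for p, u in utils.items()}
    return {p: (shares[p], shares[p] * prices[p]) for p in portfolio}

current = {p: price for p, (_, price) in portfolio.items()}
pro_plus_10 = {**current, "pro": 59.0}

for name, scenario in [("current prices", current), ("pro +10", pro_plus_10)]:
    revenue = sum(rev for _, rev in simulate(scenario).values())
    print(f"{name}: expected revenue per chooser = {revenue:.2f}")
```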
What had been headed toward abstract reporting became decision-grade portfolio pricing the team could run, use, and support up the chain.
The failure was a missing owner for decision definition, not a missing feature in software.
If this sounds uncomfortably recognizable
Most analytic failures don’t announce themselves. They arrive as reasonable defaults, polished outputs, and accepted best practices that quietly fail to support the decision they were meant to inform.
- Who is responsible for defining what decision-grade evidence means in your organization?
- What decision would actually change if your current analysis were wrong?
- Can your outputs be accepted without ever being used to choose between real options?
- Where would structural failure show up first if it existed?
- Who is accountable for surfacing that risk?
When these questions matter, conversations tend to be more useful than proposals.
