Positioning science 3: GTM & N=1 thinking

Summary: Your positioning is a bet — why not make it a rigorous one based on the fundamentals of positioning, not superficial “testing”?


How can you encourage your company to focus on building a winning position, not mere incrementalism?

It comes down to clarity about the N=1 experiments you’re running.

We’ve covered a lot of theory, and it’s easy to theorize about what positioning approach may or may not work. But ultimately, we need to put the theory to the test, and we need to do so rigorously.

N=1 positioning experiments are how we do that.

(Why “N=1”? Medical studies often use “N” as a shorthand for the number of participants in the study, e.g., N=100. In business, there’s usually just you, hence N=1.)

This approach requires a new, more qualitative, more right-brained way of thinking about experiments.

The problem with “testing”

When I talk about experiments, I’m not talking about routine “testing” here, as most folks would understand it.

If you’re in B2C and you can run meaningful, statistically significant tests against (say) the direct-response advertising you’re doing, great, test your heart out.

Those experiments are more like N=10,000+ experiments, where you’re trying to optimize for some aggregate behavior.

Positioning in B2B is not like that.

It’s much more fundamental. You’re still running an experiment, sure, but you have to prove it out one deal at a time and one positioning cycle at a time.

The problem is that B2B startups still want to use B2C tactics. But the harsh reality is that most B2B orgs lack the volume or the statistical thinking to do this well, or even at all. For example, there’s often a:

  • Lack of volume: Small companies and folks dealing with small volumes can’t test, anyway.
  • Lack of insight: Those that can, often get trapped in left-brain parts and pieces analysis. That’s fine if you are genuinely optimizing what works — i.e., you’re hill-climbing the right hill — but positioning is about picking the hill, not tweaking how you climb it.
  • Lack of org support: When folks do get the volume and do have the resources to take bigger swings, political and organizational challenges usually stop meaningful tests from happening and instead produce even more minute incrementalism. (The correct discipline-based approach people have discovered is, unfortunately, simply intolerable for most orgs.)

So what do we do? Give up on the empirical method altogether? Not at all — we just need to understand that the qualitative matters much more.

Incrementalism

Instead, unfortunately, we get a lot of incrementalism and pseudo-“testing,” which is where a lot of positioning work goes to die.

Marketers put their heads together, come up with a clarified position and a new narrative and then… tweak the homepage? Test some ads? And nothing much changes.

This kind of risk-averse approach is understandable if you’re already on a rocket ship and don’t want to mess with what’s working — that’s our ‘ride it’ strategy. That’s where you’ve found the right mountain, and you’re optimizing your marketing, to the extent you can, to find the fastest way to the top.

But science-based positioning is about ensuring you’re on the right mountain to begin with. It’s about winning, not incrementalism. And for venture-backed B2B startups, you need to be betting on a big mountain, one deal at a time, not methodically scaling a small hill. That means placing bigger bets with bigger potential outcomes.

“How do we test that?”

Ok, big mountain, got it. How do we know if we’re on the right one? How do we test that?

Again, when considering a new position, it’s tempting to think, “We’ll run some A/B tests!” Or maybe not even A/B tests — maybe you’ll just roll out some new collateral and hope for the best.

The trap here is reaching straight for the quantitative, not the qualitative, even if the qualitative is what gives you real N=1 insight and the quantitative just gives you meaningless precision around superficial metrics to two decimal places.

I harp on this not just to have a dig at marketers doing silly stuff, but because it highlights a fundamental way of thinking that we all need to break out of.

One of the profound things about the hierarchy of attention is that the modern right-brain/left-brain split fundamentally affects how we think about science and experiments in the first place (or, indeed, as McGilchrist philosophizes at length, about the future of our society).

The problem is left-brain dominance. Left-brain dominance puts the parts and pieces over the whole; it has a false sense of precision and certainty; and it can’t accept ambiguity.

This left-brain dominance plays out in much of marketing — and product and sales — today, where there’s endless false precision around superficial metrics and a failure to step back and ask, is our message any good? Or better: what did buyers actually do? Or better still: do we have credible qualitative evidence for what we’re about to do?

I’m not against measurement — counting stuff is very helpful! I actually love measurement, analytics, and A/B testing on a tactical level. (I once got so into analytics and A/B testing that I tried writing books about both topics many years ago — it’s a bad habit.) And I get that, if you’re doing direct-response advertising and get no response, guess what? You’re going to test something else. And maybe that something will work!

Likewise, if you roll out product features and want to see how much they get used, it makes a big difference to have meaningful instrumentation there.

That’s all fine — measure and test away.

But people will get so caught up in reading the tea leaves around a “test” that improved some largely arbitrary metric some small amount that they’ll forget to step back and see what buyers — not ad clickers, not visitors, and not superficial “conversions” — actually do and what we can learn from that.

Where testing matters

When it comes to positioning, we need to design a far more expansive N=1 experiment than simply tweaking headlines. We need to think bigger both in terms of ambition and in the quality of evidence we need.

Therefore, it pays to know what a meaningful experiment looks like.

Doing A/B testing over the years and following the optimization industry trained my brain in an interesting way. It turns out what we thought would matter often didn’t. What we thought would win often didn’t. What we thought users would care about was completely wrong.

It taught me to stop worrying — beyond a certain threshold of competency — about almost all of the superficial stuff and focus on the actual value we’re selling and delivering to users.

And that was hugely liberating.

Don’t get me wrong — having a high bar for competency matters, especially in crowded markets. But beyond that, knowing that “testing” is often meaningless to users and impossible for B2B orgs to do at all opens your mind to what is truly meaningful instead.

It opens the door to a new way of thinking about the experiments you run — ones that are far more considered and far more de-risked through greater insight and understanding.

This is a world where, to quote Ernest Rutherford, “If your experiment needs statistics, you ought to have done a better experiment.”

Real B2B positioning experiments

Real B2B positioning experiments are different. They’re about the right-brain whole. They use real qualitative evidence from talking to real people. They take risks commensurate with the reward they’re chasing. They operate with a nuanced understanding of the fundamentals at play. On the venture-backed side, they are about taking meaningful steps to a genuine winning position by finding the right mountain to climb. (On the bootstrapped side, your risk profile might look very different, so bear that in mind!)

This is, again, why positioning is so powerful as a concept. It helps us frame the N=1 experiment we’re running with the company itself. But only if we keep focused on the macro of positioning, not just the micro.

Macro vs. micro positioning

The point of positioning, typically, has been to find a niche where you can be #1. If you’re the best in some segment, you can then get the compounding growth flywheel going, where your early customers make great reference customers for your next cohort of prospects, who in turn make sales after that easier again, and so on, until you eventually win the segment.

But only if your growth compounds. And that’s why segment or niche leadership matters so much, in a conquer-the-market sense — the only way you can generate enough momentum to go after the bigger market is to first dominate a niche.

It’s worth emphasizing that the whole point of your product — and your position — really is to win in your market. You would think this would be obvious to everyone, particularly in startups, but it’s not.

A lot of positioning I see out there, however, is essentially micro positioning done by product marketers who have been tasked with splitting hairs between their company and competitor X or Y. I realize someone has to do this, and high-five to all you PMMs out there in the trenches trying to improve your product or company’s positioning, but this micro positioning doesn’t win markets.

If your market is growing fast enough that you’re content to grow with it, great. (Especially if you’re bootstrapped.) But if you’ve raised oodles of money, being a me-too player is probably not enough.

The problem here is competing to be the same, not competing to win. It’s necessary to stay competitive — and that is important — but it’s insufficient to beat your competitors. You can analyze all your competitors, but they can analyze you, too. You can talk to your customers about how they want to ‘save time’ or ‘make more money,’ and your competitors can and will, too.

If you’re all indexing off the same demand, you’ll all say the same thing, and you’ll all compete in the same way.

Again: fine to grow with the market; insufficient to win it.

These micro positioning approaches were once innovative; now they’re table stakes. But it’s still innovation that matters — innovation in product, innovation in experience, innovation in GTM. And that’s the real question, the real acid test: what innovative bets are you making? What’s the R&D pipeline that drives those bets? Why (and for whom) do these specific bets matter? And, when you drill down to that N=1 person, that specific human you have in mind, what position has all your work amounted to?

Consider the difference between blind “testing” and this sort of N=1 thinking, where you can conceptually drill down to the specific person who has made a specific change thanks to your product, where you’ve created a specific position in their mind. It’s that application of your own attention that’s so vital, and so often missing when we defer only to what’s quantifiable.

Positioning leadership

It’s this bigger-picture macro positioning that’s more relevant to winning your segment, and that’s the responsibility of the company’s leadership, particularly the CEO or founder. Their first job — or your first job, if that’s you — when it comes to positioning is to articulate the experiment you’re running to find a winning position in your niche or segment.

The reason this is leadership’s responsibility is that they don’t get fired for being wrong.

You have to understand that, as a leader, everyone below you is incentivized to minimize their own personal risk.

And that’s entirely rational on their part — it’s their paycheck, their career, and their day-to-day they’re worried about. Of course they want to see the company succeed, but that’s why incrementalism happens. “Hey, don’t blame me, we tested it!” is fine for not screwing up, but it rarely helps you find a winning position. It’s only the CEO or founder who has the context, authority, and permission to place the big macro bets.

Part of that might be riding the wave on a big vision, but part of it is focusing narrowly on a niche, too, right down to the N=1 customer you can move from the old way to your new way; the N=1 segment you can dominate; and the N=1 company trajectory you can set.

Your turn

  • What’s your current N=1 positioning bet? What mountain are you climbing?
  • What’s your next experiment that will prove whether you can, for example, dominate a niche?

Premature scaling

I’m a big believer in placing meaningful positioning bets, but let’s remember why “testing” culture exists in the first place — risks are real! Sometimes YOLOing it isn’t being crazy brave; sometimes it’s just being crazy, and testing provides a safety net of sorts.

This is where Gall’s law comes in: “A complex system that works is invariably found to have evolved from a simple system that worked.”

You need to prove out the simple system first, e.g., the N=1 for one customer, not an entire market. And this can be hard when there’s tremendous pressure to scale.

If the root of all evil in software development is premature optimization (thanks, Donald Knuth), the root of all evil in venture-backed startups is premature scaling.

Capital compounds, and it will compound what’s working and what’s not. If you haven’t proven out your model with a successful N=1 niche experiment, where you can reliably turn out one, then another, then dozens, and then perhaps hundreds of successful customers, then more capital won’t help you; it may in fact hurt you because now you’re trying to win a market before you can reliably win a segment.

That’s why it’s not just formal N=1 experiments that matter, but N=1 thinking. It keeps us honest by thinking through the whole experiment loop. It stops us from believing our own hype and helps us drill down to actual, tangible customer outcomes.

But how do you run these bold experiments without betting the farm?

They need to be based on fundamentals.

Taking positioning bets

Let’s go back to first principles for a moment. Positioning exists for two reasons:

  1. Because demand exists.
  2. Because competition exists.

Companies want to “position” their products closer to customer needs (i.e., demand), so their value proposition is stronger — “You’ll get the exact outcome you need!” — thus generating more buyers.

Companies also want to position their products to differentiate themselves from the competition (“Only we offer X!”) so they’re not just competing on price. (What often goes unsaid is that companies often want to deny the competition differentiation too, so a lot of work goes into competing to be the same.)

To find a winning position in your segment or market, therefore, you have to be either:

  • Closer to the demand: by either doing a better job of satisfying that demand (innovation) or being first to mind when someone experiences a need (brand).
  • Further away from the competition: by either having a more differentiated product (innovation) or a stronger go-to-market motion (brand).

Yep, it’s innovation or brand. Which is obvious, right? But what’s less obvious is finding a unique position where those things matter more.

That’s the art of positioning, and the art of N=1 experiments is doing that deliberately and rigorously, taking as much risk out of the process as possible.

That’s what the four strategies are intended to help you with, along with going to market with a clear narrative and a strong clarity strategy.

This is the fundamental bet your company makes every year, whether it knows it or not. Why not make it a deliberate positioning experiment you run, say, once a year instead? That way, you can methodically define and update your positioning — and perhaps even your product direction, pricing, and go-to-market strategy, too — in a coherent way.

To make coherent bets, though, you need a coherent strategy. And that brings us to the four strategies of science-based positioning.
