
The Right Way for an Established Firm to Do an Innovation Pilot with a Startup

For innovation-hungry legacy firms, partnering with a startup can be appealing. Relatively small investments of time and money can quickly yield generous returns. With due diligence and decent design, these partnerships can go beyond good results and energize organizations that have become too comfortable or complacent with everyday routines. In return, the startups typically get valuable references or valued customers.

“Startnership” success, however, is all too rare. The majority of aspiring start-up partnerships I see go horribly and expensively wrong. Mismanaged expectations, blown budgets, slipped schedules, and mutual contempt run rampant. With apologies to Tolstoy, all happy start-up partnerships are alike; every unhappy innovation partnership is unhappy in its own way.

The key common denominator to success isn’t careful planning and comprehensive analysis, but taking fast, cheap, and simple pilots seriously. You’re probably familiar with the “minimum viable product” of Eric Ries’ Lean Start-Up fame; but here I’m talking about the acronymically identical “minimum viable pilot.” That is, targeted tests designed to deliver unambiguous insight into business value. Think “lean pilots.”

Where minimum viable products, as Ries defines them, are versions of new products that allow teams to collect maximum amounts of validated learning about customers with the least amount of effort, minimum viable pilots represent the fastest, simplest, and cheapest test of a desirable “innovation attribute” to determine — with the least possible effort — its most likely business value to the firm.

Minimally viable products iteratively inform product and user experience design; minimally viable pilots generate scalable insights into getting the most bang for the buck from key innovations. While the learning emphasis is similar, the desired outcomes are not. Successful pilot partnerships favor fast focus over comprehensive planning. They don’t seek to assess how well an innovation works; they try to measure how well that innovation works for us.

One clever network startup discovered this the hard way. Its young engineers had come up with a technical solution that dramatically reduced downloading delays for users accessing data for their apps. The technology held appeal for network engineering departments at both telecom providers and digital service providers. The catch was that the startup had built a comprehensive solution to the latency challenge, while the engineering groups wanted to be able to test the approach on their two or three most troublesome use cases. The telecoms didn’t have the time or interest to deal holistically with delays; they simply wanted to assess which elements of the startup’s innovation would make the worst pain points disappear.

“Even though people said they liked what we were doing, we didn’t get any traction until we re-architected our offering,” the startup co-founder ruefully acknowledged. “We got our first trial when we could let customers pick and choose which features they wanted to test.”

A fintech start-up enjoyed greater success piloting a security offering with a global bank. The startup initially offered a similarly comprehensive approach, but the bank, driven by unhappy customer feedback, was interested in improving a particular aspect of the online user experience around security. With only a little extra engineering and handholding, the fintech came up with a simple suite of testable configurations that plugged into the bank’s systems. The startup got an eye-opening inside view of how a bank’s IT shop actually launched new services, and the bank realized the fintech’s approach to security management would prove superior to its own. Focusing on a particular issue yielded actionable insights faster and at lower cost than a broader approach would have.

The takeaway: minimum viable pilots work because they don’t prioritize sales or procurement over real-world learning. Piloting with startups should be about demonstrating value, not closing deals. But that value can’t, or shouldn’t, be incremental; otherwise, the pilot simply isn’t worth it.

Consequently, pilots shouldn’t be about buying systems or solutions; they’re about creating the buy-in that makes smart procurement economically possible. These experiments are pre-procurement.

Digital technology — and its post-industrial pervasiveness — clearly and convincingly disrupts the legacy economics of innovation partnerships. Between software-as-a-service platforms, APIs, and 3D/desktop manufacturing tools, collaborative value creation opportunities have never been faster, easier, or cheaper. That’s why a minimum-viable approach has increasingly become integral not just to design and development of products, but to procurement and deployment processes.

The most interesting tension I consistently witness in these startup-legacy partnerships is the battle between the business and technical sides of the legacy partner. Technical people hope the pilot effort will seamlessly interoperate with existing tech infrastructures and processes, while business folks look for efficiencies, better customer experience, and KPI alignment.

More often than not, however, these tensions prove remarkably valuable. Both sides of the firm are pushed to collaborate faster to get the trial done well. Pilots always surface previously unknown or unanticipated enterprise issues. Fortunately, they tend to be manageable. Smart startups (the ones that survive, anyway) learn how to facilitate innovation adoption, not just sell new products and services.

This sharply contrasts with typical legacy reactions to more comprehensive testing or innovation rollouts, i.e., resistance and confusion. Legacy firms striving to build innovation alliances typically discover that cultural and organizational obstacles matter far more than technical or financial ones. Pilots can help facilitate cultural and organizational change.

In this respect, the key to a pilot’s success is its very lack of ambition. Precisely because it doesn’t try to do too much, it appears less risky, less threatening, and less disruptive. Pilots become the least disruptive way for enabling disruptive innovation partnerships. They make collaborative learning simpler, safer, and more scalable.

The minimum viable pilot “startnership” checklist is straightforward:

  1. Derisk it. Make the pilot Hippocratic; first, do no harm. Data should be protected. There should be minimal interference with and/or demands made upon ongoing systems and processes. Pilots should be quick and easy to shut down. They should create no novel vulnerability. Smart legacies should offer pilot risk principles and guidelines to startups wanting to partner with them.

  2. Remember that less is more. Ask yourself, “What is the single most important insight to acquire or valuable proposition to test?” Not the top two or the top three; the most important. Ambition is the enemy. Iteration is the aspiration. Getting startups and legacies to agree on that one thing that would really move the needle always clarifies, purifies, and sharpens. Scope creep guarantees burdensome complexity and delay. Moreover, startups are typically under-resourced relative to the demands of their legacy partners. Focus and specificity strip away the usual startup excuses and highlight accountability.

  3. Explicitly align any insights gained to KPI improvements. Startups and legacies may ultimately and understandably want to learn different things from an innovation but — fundamentally — MVPs must be designed to deliver a clear measure around a desired outcome. Typically, that measure is — or directly feeds into — a KPI (key performance indicator). Whether that KPI is a Net Promoter Score or faster service or lower cost doesn’t matter. What matters is that everyone in the partnership clearly understands how and why the MVP did — or did not — deliver that KPI improvement.

  4. Know what’s next. Did the pilot enhance that KPI? Congratulations — what’s the next experiment? Does the next iteration help scale up the insight or bring another innovation attribute along for the ride? Should the next pilot double down on what’s been learned or suggest a complementary approach? For example, one startnership pilot successfully got a business unit to write a simple but powerful analytic through its API. Instead of doing another pilot with an even more sophisticated API, the legacy and startup agreed to market the initial API throughout the firm. The startup learned how to broaden its intra-enterprise reach, and the legacy firm realized it was more concerned with promoting analytics throughout the firm than with just improving the capabilities of a key business unit. In increasingly data-driven, digital, and iterative business environments, pilots live as ongoing processes, not singular or discrete events. Innovation partnerships assume a distinct “devops” flavor and aftertaste. That is, even pilot failures offer up actionable insights for what to do next — even if it’s not with one’s original “startnership” partner.

  5. Make everyone look good. Technically, a well-defined and well-designed MVP never fails. The benefits of learning — the experience, the engagement, the exposure to technical innovation and organizational insight — should clearly outweigh the financial and opportunity costs. The smart organizations I know consistently wring great value from their expanding pilot portfolios. One industrial giant, unhappy with its human resources software vendor, did a quick and dirty pilot startnership around a particularly annoying limitation of its existing system. The pilot yielded no suitable alternative or supplementary value, but it gave the legacy firm grounds to challenge its vendor to add some important features. The ploy worked. A “failed” $50,000 MVP led to a UX upgrade well worth six figures. The startup turned its ostensible shortcoming into a usable insight for its own technical and sales development.

Indeed, one clear sign of pilots’ cultural influence and success is when even “failed” tests elicit the comment, “Imagine how much more time-consuming/expensive/wasteful this would have been if we had tried to do more.”

One of the great — and welcome — ironies of digital transformation is that disproportionate impact can so quickly emerge from seemingly tiny steps. Even better, a little learning with little start-ups can profitably redefine how incumbents can co-pilot new value.


