Great article Shrivu! What do you think will be the effort needed to articulate the desirable outcome of a SaaS product well enough for the AI to build? I mean you can’t just tell it that I want to hit $1B ARR with a cybersecurity product, go! So it needs to be broken into sub-outcomes (desirable feature?) and maybe sub sub outcomes (desirable component?).
Thanks! I'd say:
- Will you be able to just prompt to $1B ARR? Definitely not, but I could see that if you took the 2026 state-of-the-art deep research + model outputs into the early 2010s (as a hypothetical time traveler) you might actually get pretty far.
- The effort probably shifts to unique industry/customer insights, bets/luck, and existing trust (i.e. brand). With a large # of folks who can now prompt the same, there's going to be some "alpha" that still exists between AI on its own and AI + "expert" owner. In some sense the explicit answer to "effort needed to articulate the desirable outcome of a SaaS product well enough for the AI to build" is the key secret sauce that will differentiate.
- I hesitate to say we'll need to break things down (even outcomes) for a superintelligent system, probably only to the extent needed to achieve the point above.
- The shape of an AI-integrated SaaS company probably also looks very different. It's weird to think of AI training a GTM team, doing customer interviews, designing tech stacks, etc. This new shape (idk yet but interesting to think about) will more effectively take these subgoals and translate them into product/sales/marketing/sub-strategy changes.
gpt4.5 had a decent answer for you as well: https://chat.sshh.io/share/j5GPHeFceCFoX8JYUOF6u
> I hesitate to say we'll need to break things down (even outcomes) for a superintelligent system, probably only to the extent needed to achieve the point above.
Specification in greater detail, to me, isn't so much about the agent being unable to execute the task but more about the agent executing it in a way that achieves the outcome as I intended it (maybe this is covered by alignment, idk)
i.e. the statement "build me a company that gets me to $1B ARR" can only contain so much information, by necessity of its concision
Maybe this is semantics, depending on how we see "sub-outcomes" - either as fleshed out versions of the outcome, or as a part of the process. The former interpretation feels desirable and the latter does not
Thanks for sharing your thoughts!!
1. I'm trying to concretize my own opinions about what timelines are going to look like, but "AI coding is here to stay" resonated - I think even in its current form, with no improvements, AI coding tools are already making engineers so much more effective, so it decidedly seems like the genie's out of the bottle
2. The distinction between High-Stakes Adapters and Low-Stakes Adapters is not clear to me. Specifically, I get the idea at a high level, but I'm not personally able to come up with a set of characteristics that would place a role in one category or the other (basically I don't know how to break down "High-stakes adapters are required to maintain full competence in the role that AI is replacing and be able to perform the full task 'offline'" further) - curious if you have more fleshed-out thoughts here?
The natural progression is that with AI tools becoming more and more powerful, most folks will no longer need to know how to perform the underlying skill. At some near-future inflection point, an "engineer" who can "prompt" will be considered more valuable than one who is purely good at coding. These are low-stakes adapters.
I do however think that there will be certain critical roles where the coding skill (or some other underlying ability) is still valuable. To me, pilots are a core example of a high-stakes adapter. Even if AI is just as good at flying the plane, there's a reason to maintain a fully skilled pilot for edge cases and emergencies.
In both cases AI gradually catches up, but the core difference is that high-stakes adapters must remain experts even in the areas where AI excels.
This all makes sense, but maybe to rephrase my original question - what makes a role one that requires "high-stakes adapters" in your opinion? If you took a basket of roles in that category (of which pilot is one example), what are the commonalities between the roles in that basket?
From GPT which did a great job:
---
High-stakes adapter roles typically share these common characteristics:
1. High Cost of Failure
- These roles center on human safety, critical infrastructure reliability, or high-value economic processes. A failure here could directly result in large-scale, irreversible harm (catastrophic damage, loss of life, significant financial costs).
Examples from aviation, autonomous vehicles, nuclear oversight, or medical procedures nicely illustrate this.
2. Legal, Ethical, or Regulatory Requirements
- Roles that have explicit legal obligations, compliance mandates, or formal oversight expectations (like aviation regulations requiring trained pilots to maintain manual-flight skills, medical professionals compelled to intervene manually for ethical/legal reasons) become natural candidates.
These are areas where society explicitly (legally or ethically) demands human oversight.
3. Irregular, Edge-Case, or "Tail-event" Situations
- Roles where there's a meaningful probability of encountering rare, unpredictable situations that existing AI systems could struggle with (even if these AI systems are highly competent in 99.9% of scenarios). Despite AI's general competence, maintaining human readiness at full expertise becomes critically important exactly because the circumstances that demand immediate human intervention are rare, unusual, and severe.
4. Public (or Customer) Trust and Accountability
- Roles whose acceptance and perceived reliability rest explicitly on public trust in a human's skill (rather than trust solely in algorithmic or automated decision-making). Societal and psychological factors shaping public perception can push a role into the "high-stakes adapter" category, because the role must tangibly demonstrate human judgment and human accountability (even if only symbolically or as reassurance).
In contrast, low-stakes adapter roles are distinguished by the relative ease and safety of AI-driven trial-and-error, lower costs of mistakes, reduced regulatory and safety pressures, ease-of-fixing AI-generated errors, and a societal comfort with automation even at the cost of occasional mistakes or inefficiencies. For example:
- Software Engineering for general SaaS apps where occasional bugs from LLM-generated code aren't catastrophic
- Marketing or content writing for web applications, where suboptimal outputs aren't existential threats
- Knowledge-worker tasks whose deliverables are less regulated, and the incremental costs of small failures are negligible or easily addressed
A synthesized example "basket" of high-stakes adapter roles might include:
- Airline pilots
- Surgeons and emergency medical personnel
- Nuclear and critical-infrastructure operators
- Autonomous vehicle safety specialists or human supervisors
- Military command or safety-critical systems engineers
- Cybersecurity experts managing extremely sensitive infrastructure
- Mission-critical financial trading or economic-forecast professionals
- AI security and auditing experts who insist on human oversight and final interpretability judgments
All these roles share the combination of:
- Severe consequences and large-scale impacts if things go wrong
- Complex edge-case scenarios demanding urgent interventions at expert level
- External societal/legal regulatory factors mandating human oversight
- Trust and accountability that inherently require or benefit greatly from the presence of fully competent human oversight
That's a principled breakdown of the implicit reasoning found in the original author's post.
https://chat.sshh.io/share/OQhTkxxyUuposA8t3DoLl
I'm grateful to see I've been working on the uncomfortable skills for most of my career. Effectiveness breeds efficiency, yet culture eats strategy. Most companies are unaware of the culture they really have, and that's what prevents strategy execution.