Great article Shrivu! How much effort do you think it will take to articulate the desirable outcome of a SaaS product well enough for the AI to build it? I mean, you can't just tell it "I want to hit $1B ARR with a cybersecurity product, go!" So it needs to be broken into sub-outcomes (desirable features?) and maybe sub-sub-outcomes (desirable components?).
Thanks! I'd say:
- Will you be able to just prompt your way to $1B ARR? Definitely not, but I could see that if you took 2026 state-of-the-art deep research + model outputs back to the early 2010s (as a hypothetical time traveler), you might actually get pretty far.
- The effort probably shifts to unique industry/customer insights, bets/luck, and existing trust (i.e. brand). With a large number of folks who can now prompt the same thing, there's still going to be some "alpha" between AI on its own and AI + "expert" owner. In some sense, the explicit answer to "effort needed to articulate the desirable outcome of a SaaS product well enough for the AI to build" is the key secret sauce that will differentiate.
- I hesitate to say we'll need to break things down (even outcomes) for a superintelligent system; probably only to the extent needed to achieve the point above.
- The shape of an AI-integrated SaaS company probably also looks very different. It's weird to think of AI training a GTM team, doing customer interviews, designing tech stacks, etc. This new shape (idk what it looks like yet, but interesting to think about) will more effectively take these subgoals and translate them into product/sales/marketing/sub-strategy changes.
GPT-4.5 had a decent answer for you as well: https://chat.sshh.io/share/j5GPHeFceCFoX8JYUOF6u
I'm grateful to see I've been working on the uncomfortable skills for most of my career. Effectiveness breeds efficiency, yet culture eats strategy. Most companies are unaware of the culture they really have, and that's what prevents strategy execution.