2 Comments
Philip Zeng

Hi Shrivu, cool article! I'm in agreement that systems thinking is important for managing both people and AI, but I think the examples provided set the expectations for human competence pretty low. For Case 1, is a brand new data analyst really not going to ask their manager which dataset they should be using so they can get to work on their projections ASAP? For Case 3, is a competent recruiter really never going to clarify which positions they're hiring for in the first place, especially over a two-week period? For me, the ability to ask clarifying questions independently is an important distinction between humans and AI; I'd be interested to hear your thoughts on this.

Shrivu Shankar

Thanks! You are right (and the examples are definitely contrived, though hopefully not completely unbelievable), but I would argue these clarifications all fall under an LLM's "default calibration" limitation rather than a more fundamental one.

Specifically, "the ability to ask clarifying questions independently is an important distinction between humans and AI" is something I would expect to reduce to effectively zero with the right manual calibration (e.g. "Do not continue if unsure; ask follow-up questions if needed") and with future improvements to the models. For me there is some abstract threshold between when to ask questions and when to just finesse an answer, and LLMs aren't yet aligned by default with human expectations about where that threshold sits.
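As a rough sketch of that kind of manual calibration (assuming the OpenAI Python client; the system-prompt wording, model name, and example request are purely illustrative):

from openai import OpenAI

client = OpenAI()

# "Manual calibration": an explicit instruction telling the model to stop
# and ask rather than guess when the request is underspecified.
system_prompt = (
    "You are a data analyst assistant. "
    "Do not continue if you are unsure which dataset, time range, or "
    "definition the user means; ask a follow-up question instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Build the Q3 revenue projections."},
    ],
)

# With the instruction above, the reply should be a clarifying question
# (e.g. "Which revenue dataset should I use?") rather than a guessed answer.
print(response.choices[0].message.content)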
