3 Comments
Dave Hallmon

Sharp title, I’m sure this will get more reads. The title pulled me from my doom scroll and actually delivered. Not often the case here on Substack!

Dave Hallmon

The AI's ability to construct a "watertight, multi-framework, confidently delivered argument for the wrong answer" is a vivid example of the hallucination risk I teach teams to guard against.

Dave Hallmon

The "consultant's error" of a polished presentation masking a fundamentally wrong recommendation is one we see far too often.

On human checkpoints at every stage of the workflow: how do you see that best translating into new AI systems, especially when leaders want to see AI save time and reduce human oversight?