Discussion about this post


The "consultant's error" of a polished presentation masking a fundamentally wrong recommendation is something we see too often.

With human checkpoints at every stage of the workflow, how do you see that best translating into new AI systems, especially when leaders want AI to save time and reduce human oversight?

Dave Hallmon:

The AI's ability to construct a "watertight, multi-framework, confidently delivered argument for the wrong answer" is a vivid example of the 'hallucination' risk I teach teams to guard against.

