8 Comments
Deborah Pascoe:

Yeah, nah...I think I'll stick with my extraordinarily well-developed skill of abject denial

Mark Sewell:

Re your defense of the patient strategist: I've always thought the answer to every strategic decision is Yea, Nay, or Delay. But isn't constant delay, the absence of decisions, the most frustrating rot, the kind that destroys most businesses (or nations)?

A couple of orgs I know have had leadership changes recently, and the change in pace of decision-making is cited as a nice reprieve, at least for a little while, until the lack of results eats up the passion people have for working there. Note to self: do the due diligence, but then get moving, as movement may just be the oxygen that fans the flames of productivity?

Anyway, thanks Andrew for another good mag.

Andrew Hollo:

Interesting reflection on how speed and leadership change intersect, Mark. I frequently work with new CEOs (a new CEO often triggers a desire to reset strategy), and I hear one of two reflections:

1. "We're too slow, too passive" (the new CEO wishes to accelerate decisions, action and 'time to results')

2. "We act too rapidly, without thinking things through" (the new CEO wishes to slow things down, go back to first principles, and work out criteria for the best decisions)

Getting BOTH just right is the Goldilocks moment, of course.

Sharon:

AI should be designed to amplify feedback rather than filter it — making us more transparent and answerable, not less. If you simply take its responses at face value and present them to the world, you’re using AI as a front. But if you use it to magnify voices, track commitments, and make progress visible, then it becomes a real tool for accountability.

Andrew Hollo:

Yes, agreed, Sharon. I've run a little experiment with my clients: I've prepared a summary of a discussion, and I've had AI prepare one too. I've shown the client BOTH and asked, "Which of these would you be MORE willing to stand accountable to?"

My sample size is only n=3, but each time they could detect the 'human' version, and the nuance it captures makes them MORE likely to hold themselves accountable to it.

Deborah Pascoe:

A relatively minor concern, in the bigger scheme of how AI might shape our world and our humanity, is the sycophantic nature of the beast. When I asked ChatGPT about my blind spots, I was first praised for such a "great, courageous question". Then I felt kinda chuffed... am I supposed to like my blind spots? Or is this a blind spot in itself?

Andrew Hollo:

Good point, Deb. You can ask it NOT to be sycophantic. Tell it, "I don't need your validation. I need your objectivity." You might NOT like what you hear, though...

Deborah Pascoe:

Yeah... nah, I think I'll stick with my long-term favourite blind spot: denial :)
