Discussion about this post

Scott Davis

Excellent and thought-provoking ideas, as usual, Donald. The discussion reminded me of another we've had many times over the years, regarding whether and to what degree we trust the data vs. trusting the person who provides it. For the current post, I see a similar thread woven throughout:

The context of an observation is essential to the interpretation/meaning of that observation.

When we receive observations from a person, we load those observations with tremendous amounts of metadata about the observer: our history with them, our knowledge of their habits and biases, their accounts of recent and distant experiences, their demonstrated skills and blind spots, their recent travels or work projects, their friends/teachers/collaborators, their worldviews, their value systems, their analytical preferences and tools, and so on.

We rarely assume universality of their info, even when we regard them as a reliable and trusted source of information. We rarely need to make that assumption, because their info is so enriched with that trove of metadata.

If I say that "it will touch 30 degrees today," my friends will know that I mean cold/Fahrenheit because I am hiking around Mont Blanc at 3,000 meters. 30 as data means very little compared to the trove of contextual metadata.

I think this asymmetry in value between data and metadata has something to do with the myth that objectivity is commonplace. Useful, meaning-rich information about reality is most often wildly subjective: tied to our peculiar experiences, to the uniqueness of those moments and of us. This inherent subjectivity, so commonplace in the useful, meaningful information we encounter, helps explain why tacit metadata is so helpful in disambiguating the info we share with each other. But we have no such metadata context for an AI source.

The final thought that your piece generated in me today (on a walk, no less) is that there is a powerful hack for our tendency to interpret info as deterministically reliable: make the confidence window explicit.

In my analytic and strategic work with clients over the last few years, I have found that asking them specifically, "How certain do you want to be that your decision is correct?" is a very powerful tool for moving them out of the mode of thinking that every piece of information they see is either absolutely reliable or absolutely not. The question reframes their cognitive process, moving them into a grayscale universe, which happens to be the sort of universe in which we actually live. I find that the quality of thinking produced by every person after this question is superior to the quality before it, irrespective of whether the person is quite ordinary in mental capability or quite sophisticated.

Perhaps this could help us with your concerns about the way we interact with AI. I suspect some interesting and productive interactions might be created by asking AI the following questions about its responses to our inquiries (a rough sketch of scripting these follows the list):

1. How sure are you that your assertion of X is correct?

2. What are 2-3 alternative perspectives/answers to my original question?

3. Looking across these possible answers, and considering both the unique contexts in which each is more likely to be preferable and the context within which I am asking, rank-order them by the probability that each is superior to the others in my context.

4. What aspect of my context, if changed, would cause you to rate a different one of the possible answers as the most reliable for me? Explain why you assert that these changes in context require a change in answer.
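For anyone who talks to a model through an API rather than a chat window, here is a minimal sketch of that four-question probe as a scripted conversation. It assumes the `openai` Python package and an OpenAI-compatible chat endpoint; the model name and the `probe_confidence` helper are illustrative choices of mine, not anything from the original discussion.

```python
# A rough sketch of the four-question confidence probe as a scripted chat.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set in
# the environment, and the model name is merely illustrative.
from openai import OpenAI

client = OpenAI()

FOLLOW_UPS = [
    "How sure are you that your assertion above is correct?",
    "What are 2-3 alternative perspectives/answers to my original question?",
    ("Looking across these possible answers, rank-order them by the "
     "probability that each is superior to the others in my context."),
    ("What aspect of my context, if changed, would cause you to rate a "
     "different answer as the most reliable for me, and why?"),
]

def probe_confidence(question: str, model: str = "gpt-4o") -> list[str]:
    """Ask the original question, then each follow-up probe in turn,
    keeping the full chat history so every probe refers back to the
    model's earlier answers."""
    messages = [{"role": "user", "content": question}]
    replies: list[str] = []
    for turn in range(len(FOLLOW_UPS) + 1):
        resp = client.chat.completions.create(model=model, messages=messages)
        answer = resp.choices[0].message.content
        replies.append(answer)
        messages.append({"role": "assistant", "content": answer})
        if turn < len(FOLLOW_UPS):
            messages.append({"role": "user", "content": FOLLOW_UPS[turn]})
    return replies

if __name__ == "__main__":
    for reply in probe_confidence("Should we normalize this database schema?"):
        print(reply, "\n---")
```

The point of carrying the whole message history forward is that each probe is asked about the model's own prior answers, mirroring how the questions above build on one another.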

