Scott Davis

Thought-provoking, as usual. Thanks, Donald. AI from peripheral vision:

When we think about how learning happens, I am struck by the sense that there are powerful factors at play which do not pertain directly to the subject and object of such learning. For example, questions of values, preferences, desires, incentives, penalties, evaluation, feedback, error correction, and so on are salient if we want to understand how human learning happens. Without addressing these ideas directly, we are left with the profound question: "What pressure or forcing functions create an avoidance response to error and a reward-chasing response to expertise/knowledge/truth?" Or: "How does one arbitrate between alternative choices that cannot be fully disambiguated -- how much certainty is the right amount for a particular decision?" "Why does it matter to me whether my answer is right or my decision defensible?"

Much of the discussion of AI focuses on the mechanics of how a decision is chosen, but I am very interested in the question of "why" -- that is, how the experience of answering fits into a larger context in which the answerer is forging a path toward something it values, a path in which right answers and good knowledge are helpful to the answerer -- and bad answers have consequences for it as well. This is a powerful factor for kids learning chess or adults learning to be nurses.

This is a sideward glance at the AI questions we are all discussing, but in a dark room, indirect vision sometimes registers forms that direct vision cannot yet capture. If we say none of this matters for AI, we should abandon the pretense of analogizing between AI learning and human learning.
