Donald, this post is instructive for all of us, whether working on computer intelligence or on our own old-fashioned organic intelligence. You’ve highlighted a key categorical boundary on the continuum of decision-making: Approach/Avoid response vis-a-vis the Novel.
First-order decision-making (whether silicon or carbon) starts with rules that relate an objective to the norm or to the familiar. At best it ignores the unfamiliar; more often, as in your example, it rejects the unfamiliar outright. It is primed to AVOID anything new, unusual, or unfamiliar.
Higher-order decision-making is based on learning, which requires curiosity about what lies beyond the familiar yet might still bear on the objective. Curiosity-driven learning sees opportunity in the unusual, leading to an APPROACH instinct.
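The avoid/approach contrast can be sketched in a toy chooser. Everything here is illustrative (the function names, the novelty bonus, the scores are all made up for the sketch); the second policy is just a novelty bonus in the spirit of curiosity-driven exploration, not anyone's production system:

```python
# Toy contrast: first-order (AVOID) vs. curiosity-driven (APPROACH) selection.
# All names, options, and scores below are hypothetical.

def first_order_choice(options, familiar, scores):
    """Rule-based: consider only familiar options; reject the novel."""
    known = [o for o in options if o in familiar]
    return max(known, key=lambda o: scores.get(o, 0)) if known else None

def curious_choice(options, familiar, scores, novelty_bonus=1.0):
    """Curiosity-driven: unfamiliar options earn an exploration bonus."""
    def value(o):
        return scores.get(o, 0) + (novelty_bonus if o not in familiar else 0)
    return max(options, key=value)

options = ["known_path", "odd_new_thing"]
familiar = {"known_path"}
scores = {"known_path": 0.6, "odd_new_thing": 0.5}

print(first_order_choice(options, familiar, scores))  # avoids the novel
print(curious_choice(options, familiar, scores))      # approaches it
```

The only difference between the two policies is the bonus term, which is the whole point: the APPROACH bias is a deliberate design choice, not an emergent property.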
So, to move beyond the primitive level of decision-making (in ourselves and in our synthetic progeny) we need to develop a bias toward the novel -- an APPROACH bias -- as captured in one deceptively powerful little sentence:
"I wonder what that is about."
We have spent so much time building applications that are definitive and absolute that curiosity, tentativeness, and humility rarely appear in our design and development processes. I think this will lead to some very serious failures in AI if we don't start thinking about how to build more uncertainty and inquiry into our systems.
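One minimal way to build that tentativeness into an otherwise definitive pipeline is a confidence threshold that routes uncertain cases to a human instead of forcing an answer. This is a sketch with made-up labels and probabilities, not any particular framework's API:

```python
# A decision step that is allowed to say "I don't know yet."
# Labels, probabilities, and the threshold value are all illustrative.

def decide(probabilities, threshold=0.8):
    """Return a label only when confident; otherwise defer with a question."""
    label, p = max(probabilities.items(), key=lambda kv: kv[1])
    if p >= threshold:
        return ("decide", label)
    return ("defer", "I wonder what that is about.")  # route to a human

print(decide({"cat": 0.95, "dog": 0.05}))  # confident: commits to "cat"
print(decide({"cat": 0.55, "dog": 0.45}))  # uncertain: defers to a person
```

The deferral branch is exactly the humility the comment above is asking for: the system's output includes an explicit "this is beyond what I know" path rather than an always-definitive answer.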
This is a wonderful story and a great example of learning and applying lessons and of iterative decision-making. And given that, for now at least, metaphor, humility, and deep engagement are beyond the computational statistics we hype as AI, it also illustrates the abiding need for humans to make the decisions that lie beyond the automated process.