At first, I did not grok the adjacency of the two topics woven into this post: competition/collaboration and the reductive conception of intelligence. But, as usual, you got me thinking more deeply and more wanderingly. That wandering brought me to the potential intersection of (1) Gödel's Second Incompleteness Theorem and (2) a general theory of self-knowledge or self-monitoring, specifically the question of whether a sufficiently sophisticated system will always lack the ability to reliably self-diagnose and self-monitor, or, to put it another way, to understand how its understanding works.

I think there is a good chance Gödel's proof provides a useful criterion for thinking about sophisticated systems, such as intelligences of the organic and non-organic kind. I sense that Gödel's theorem reveals that isolated intelligence is a contradiction in terms, because it is impossible for such an intelligence to self-validate, self-diagnose, or self-monitor... to have perfect mastery of the system using only the system itself. And this limitation drives directly at your twin topics, because (1) any construction of intelligence, whether AI or other, that CAN reliably self-monitor, self-describe, self-know, and self-validate must necessarily be reductive, and therefore (2) all reliable complex intelligence must be collaborative.

Paradoxically, this suggests that when we finally arrive at a synthetic intelligence of complexity similar to our own, if we ask it how it works, it will say something like "I am not quite sure." And then ask a synthetic friend for its thoughts.