When the Unseen Obscures the Seen
My friend Chrisman Frank forwarded me this snippet yesterday from Naval about what he sees as a wrongheaded approach to pursuing “artificial general intelligence” (AGI). The wrong idea, Naval says, is that to achieve something artificial worthy of being called intelligent, we simply need to “add more compute”. The assumption being that we already have all the basic ingredients we need; we just don’t have enough of them (or can’t run them fast enough) to produce the kind of intelligence we see in humans, for instance.
There are many paths we could traverse from here to question and critique this idea. For instance, it’s not at all clear to me that “AGI” is a viable concept. In fact, it’s not clear to me that “general intelligence” is either — artificial or otherwise. It seems to me that “intelligence” is contextual: the Tesla is faster on the smooth road but the goat is faster over the rocky mountain. But that’s an issue for another time.
What Naval points to as the key difference between our “AI” products such as GPT-3 and human intelligence is the human propensity to explain the seen by the unseen.
The Seen by the Unseen
This practice of offering explanations for things as the means of understanding them is developed by David Deutsch. In his framework, the mark of a good explanation is that it can’t be varied arbitrarily — that is, if you changed some aspect of the explanation, which describes the unseen, it would no longer sufficiently explain the phenomenon, the thing that is seen. Conversely, if an explanation can be arbitrarily changed and still explain the same thing, then it’s not an explanation that can lend much insight or leverage.
This propensity to develop explanations of the seen via the unseen, Naval asserts, is the sine qua non of human intelligence.
This is clearly something we do as humans. And whether this propensity is the cause of our distinctness relative to other creatures or merely a symptom of it, it has given us immense power. Technological power in particular.
But power and risk are inseparable.
The ways in which wielding technological power carries risk are fairly obvious, and could be expanded upon at length (and have been in many forums). But there is another risk associated with this power.
Humans are so primed to explain the seen by the unseen that often they are not sure which of the two they are interfacing with, and can be seduced into becoming blind to the seen when it conflicts with their understanding of the unseen.
It Ain’t What You Don’t Know That Gets You Into Trouble. It’s What You Know for Sure That Just Ain’t So. — Mark Twain (apocryphal)
Not only can our explanations be wrong, but they can be wrong and COMPELLING. When dealing with complexity one faces this issue constantly: there are strongly held beliefs that certain kinds of explanations suffice, even when they fail to deliver over and over again.
In the development of software, for instance, there is the persistent belief that one can set out all the requirements up front, carve them up into chunks, satisfy them independently and re-integrate them in a straightforward and seamless manner on the tail-end.
There is a lot that is not accurate about this picture, but for now it suffices to simply remark that it is indeed not accurate, and leads to countless failures — empirically.
And, typically, accompanying each of these failures is a specific explanation of the failure framed via the details of that particular failure.
But what is visible, yet unseen, is the series of failures that this mode of development induces. What should be the seen becomes obscured by the unseen: by the explanation.
This issue is all around us. We dump more money into government programs when government programs fail. We double down on novel vaccines to mitigate contagion while simultaneously observing they don’t do a great job performing that function.1 We further centralize power while observing increasing tensions between diverse cultural expectations and norms. On and on.
Simply, we see what we want to see. And the unseen, the explanation, helps us do it. We insist on doing what we believe ought to work, despite watching it not work in front of our eyes. And the problem becomes worse with scale, precisely where it is most critical to avoid.
Power and risk are inseparable.
1. From the article (much more detail there, but for flavor): “The study shows that people who become infected with the Delta variant are less likely to pass the virus to their close contacts if they have already had a COVID-19 vaccine than if they haven’t. But that protective effect is relatively small, and dwindles alarmingly at three months after the receipt of the second shot.”