Sorry, a chicken. There's no reason for me to make this talk more depressing than it needs to be.
It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive,
and if we build machines that are more intelligent than we are,
they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.
And it's important to recognize that this is true by virtue of speed alone. Right?
So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT.
Well, electronic circuits function about a million times faster than biochemical ones,
so this machine should think about a million times faster than the minds that built it.
So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week.
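The back-of-envelope arithmetic behind that figure is easy to check. A minimal sketch, using the talk's own assumed numbers (a million-fold speedup, one week of wall-clock runtime):

```python
# Hypothetical figures taken from the talk, not measured values.
speedup = 1_000_000       # electronic circuits vs. biochemical ones
weeks_of_runtime = 1      # one week of wall-clock time
weeks_per_year = 52

equivalent_years = speedup * weeks_of_runtime / weeks_per_year
print(round(equivalent_years))  # about 19,231 -- roughly the "20,000 years" cited
```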
How could we even understand, much less constrain, a mind making this sort of progress?
The other thing that's worrying, frankly, is this: imagine the best-case scenario.
So imagine we hit upon a design of superintelligent AI that has no safety concerns.
We have the perfect design the first time around.
It's as though we've been handed an oracle that behaves exactly as intended.
Well, this machine would be the perfect labor-saving device.
It can design the machine that can build the machine that can do any physical work,
powered by sunlight, more or less for the cost of raw materials.
So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.