AI and Exponential Growth

Robert Gordon is dismissive of the possibility of productivity growth through AI.  Tyler Cowen, Kevin Drum, and Matthew Yglesias are all indignant.  This is a rare subject where I probably know more than any of these four much more famous people, so let's do a deep dive.

Basically, the argument Yglesias and Drum make is that while AI isn't yet smart enough to automate the overwhelming majority of human tasks, exponential growth in computing capacity will (soon!) remove those limitations, allowing much more competent AI and/or robots.  This relies on the steepness of exponential curves.

So there's a story about chessboards and grains of rice, but I like Kevin Drum's nicely illustrated bit about filling Lake Michigan with drops of water (scroll down a bit to the animated picture), or just something a math book suggested to me about allowances when I was small.  The allowance concept is this: ask your parents for an allowance of one penny on the first day of the month, with the amount doubling every day after that.

The progression is:

Day 1: $0.01.  Day 2:  $0.02.  Day 3: $0.04.  Day 4: $0.08.  Day 5: $0.16.  Day 6: $0.32.  Day 7: $0.64.

Now, at this point, your allowance looks pretty pathetic.  It is cumulatively $1.27 in a week.  But wait:

Day 8: $1.28.  Day 9: $2.56.  Day 10: $5.12.  Day 11: $10.24.  Day 12: $20.48.  Day 13: $40.96.  Day 14: $81.92.

Okay, now things are looking pretty different.  Your two week take is $163.83.  And you’re only halfway done!

Day 15: $163.84.  Day 16: $327.68.  Day 17: $655.36.  Day 18: $1,310.72.  Day 19: $2,621.44.  Day 20: $5,242.88.  Day 21: $10,485.76.

Your three week take is $20,971.51!  And the fourth week, of course, is much, much bigger than all the previous weeks combined.

Day 22: $20,971.52.  Day 23: $41,943.04.  Day 24: $83,886.08.  Day 25: $167,772.16.  Day 26: $335,544.32.  Day 27: $671,088.64.

So at one day before the end of your month (what?  It’s February), you’ve got a cumulative total of $1,342,177.27.  And then your final day doubles that with another $1,342,177.28, leaving you at more than $2.6 million.  Your parents are probably going to break this deal.

This is starkly unintuitive.  I mean, you probably know that exponential growth is fast, but it's hard to wrap your mind around how fast.  You look at $1.27 at the end of your first week and think, "Okay, but this is increasing quickly; I might make thousands of dollars!"  But in fact you make millions.
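If you want to check the arithmetic, here's a minimal Python sketch of the allowance; the weekly totals it prints match the figures above ($1.27, $163.83, $20,971.51, and finally about $2.68 million):

    # A quick sanity check of the doubling-allowance arithmetic: the payment
    # doubles from one penny, and the running total is always one penny shy
    # of the next day's payment.
    total_pennies = 0
    for day in range(1, 29):                 # a 28-day month ("It's February")
        payment_pennies = 2 ** (day - 1)     # 1 penny on day 1, doubled daily
        total_pennies += payment_pennies
        if day % 7 == 0:                     # report at the end of each week
            print(f"Day {day}: paid ${payment_pennies / 100:,.2f}, "
                  f"cumulative ${total_pennies / 100:,.2f}")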

Applying this to AI, here's what Yglesias and Drum are getting at: assuming that computing power is in fact increasing exponentially, it's easy to look back at the era from 1960 to 2000 of "people saying AI is just around the corner, but actually it's not," and then the era from 2000 to 2014 of "Watson wins at Jeopardy!, and people talk about robot cars, but you still don't see any of it in your life," and conclude that it's all hype.  But if where we are is day 14 of the exponential allowance, then day 15 beats all of days 1 through 14 put together!  And by day 21, we've gone from under $200 to more than $20,000!  And more orders of magnitude await.  It's just as unintuitive with AI, they argue: we look at 50 years of not-very-much progress, disregard a couple of minor recent successes, and miss that it's about to snowball into incredible developments, and soon.

Well, maybe.

It’s possible that they’re right.  But there are some assumptions here that we should be digging into.

First, let's talk about where we are in the exponential growth of computing power.  The phrase you're looking for here is Moore's Law, which predicts, technically, a doubling of the number of transistors on an integrated circuit of a given size roughly every two years, and, informally, a doubling in the processing speed of computers every two years or so.  It has held true so far.  But it won't forever.  There are hard physical limits on the minimum size of a transistor on an integrated circuit, and we're up against them.  We won't get much more than one or two more doublings that way.  Other technologies (quantum computing, easier ways of building parallel algorithms, or other, more exotic stuff) may take over and give us the same kind of increase that Moore's Law did, but if we look at Moore's Law as it was originally conceived, we aren't at Day 14 of our allowance; we're at Day 26 or 27.
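To put rough numbers on that: Moore's original 1965 observation concerned chips with a few dozen components, while a high-end chip circa 2014 packs a few billion transistors.  A back-of-the-envelope calculation with those ballpark figures (assumptions for illustration, not precise counts) lands right around the day-26-or-27 mark:

    import math

    # Ballpark figures only, not precise counts: roughly 50 components on the
    # most complex chips of 1965, versus roughly 5 billion transistors on a
    # high-end chip circa 2014.
    transistors_1965 = 50
    transistors_2014 = 5_000_000_000

    doublings_so_far = math.log2(transistors_2014 / transistors_1965)
    print(f"Doublings since 1965: ~{doublings_so_far:.0f}")   # ~27, i.e. "Day 26 or 27"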

Now, one or two more doublings ain’t bad!  After all, that’s twice or four times as much computing power available to us!  But note that Watson’s hardware isn’t exactly consumer-grade, either.  It’s a $3 million machine.  Even if computing power available to us goes up by a factor of 4, or 8, or 16, we aren’t going to be able to put a Watson — or a next-gen, yet-more-powerful Watson — everywhere.  To have consumer-available AI, we’ll need something to take over for Moore’s law.

But that’s not that unlikely.  There are probably more than two doublings to squeeze out of processors yet.  I’m not sure there are 10 doublings.

Second, and more importantly, let's talk about exponential problems.  Specifically, let's talk about NP-Hard problems.  NP-Hard has a formal definition that is difficult to explain without a lot of computer science background, but the practical upshot is this: for an NP-Hard problem, as far as anyone knows, the computing power needed to solve it grows exponentially as the size of the problem grows linearly.

As an example, let's look at Watson's predecessor, Deep Blue.  Deep Blue was a chess-playing computer that famously beat Garry Kasparov back in 1997.  The special sauce of Deep Blue was pretty simple: speed.  Deep Blue (in the mid-game; I'm eliding some complexity here) just looked at every possible move it could make, then every possible response its opponent could make, then every possible response it could make to that, and so on.  It could evaluate about 200 million board positions per second, which meant it could look ahead about 10 moves in total: every possible combination of move and response, about five moves deep for each player.

What if Deep Blue wanted to look ahead 11 moves?  Well, that would be roughly 30 times harder.  What about 12 moves?  Roughly 900 times harder.  Thirteen moves?  27,000 times harder.  Etc.  That is, as it looks ahead farther, the problem gets exponentially bigger.

Since Deep Blue was created in the mid-1990s, we've gone through about 10 doublings of processor power, so computers are roughly a thousand times faster.  In other words, the last 20 years of hardware progress would let us add only about two moves to Deep Blue's lookahead.

But what about the magic of exponents?  How come, in 20 years, we aren't looking ahead 100 moves, or 200?  Because the problem is also exponential, and its exponential is steeper than Moore's Law: each extra move of lookahead multiplies the work by about 30, while each Moore's Law cycle only multiplies our computing power by 2.
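Here's a small Python sketch that puts numbers on this.  The branching factor of roughly 30 legal moves per position is an assumption implied by the "roughly 30 times harder" figure above; the true number varies from position to position.

    import math

    BRANCHING_FACTOR = 30   # assumed average number of legal moves per position

    # Cost of looking ahead a few extra moves, relative to today's search.
    for extra_moves in (1, 2, 3):
        cost = BRANCHING_FACTOR ** extra_moves
        print(f"{extra_moves} extra move(s): ~{cost:,}x more positions to evaluate")

    # How many hardware doublings does each extra move of lookahead cost?
    doublings_per_move = math.log2(BRANCHING_FACTOR)
    print(f"Hardware doublings needed per extra move: ~{doublings_per_move:.1f}")

    # So ~10 doublings (about 20 years of Moore's Law) buys only ~2 extra moves.
    print(f"Extra moves bought by 10 doublings: ~{10 / doublings_per_move:.1f}")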

AI is replete with NP-Hard problems, along with problems that aren't understood well enough to categorize formally but seem likely to be at least that hard.

The upshot of all this is that even if we have many years of exponential growth ahead of us in terms of processing power, that may only result in linear growth of the sophistication of AI — if the important problems in AI are themselves exponential.
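A toy model makes that trade-off explicit.  Assuming hardware doubles every two years and the problem's cost multiplies by a constant factor for every extra unit of problem size (I'm reusing the chess-like factor of 30 purely as an illustration), the problem size we can handle grows only linearly with calendar time:

    import math

    def extra_size_solvable(years, cost_base=30):
        # Toy model: hardware doubles every 2 years; the problem's cost
        # multiplies by `cost_base` for each extra unit of problem size
        # (cost_base=30 echoes the chess example above).
        doublings = years / 2
        return doublings / math.log2(cost_base)

    for years in (10, 20, 40):
        print(f"{years} years of Moore's Law: ~{extra_size_solvable(years):.1f} "
              f"extra units of problem size")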

But third, and most importantly: AI is not, I think, primarily constrained by computing power.  It's primarily constrained by the fact that programmers don't really know how to write a good AI.

If you just naively take Watson and throw amazingly fast computers at it, it may get noticeably better than it was.  But it won't become a general-purpose AI, because no matter how fast the hardware is, we don't know how to build general-purpose AI.  We don't even know how to build a special-purpose AI that's a little more general than Watson.  There's no Moore's Law for our understanding of what we're even trying to do when we write AIs, and there's no good reason to believe that the primary constraint on us is hardware.

There are a few hail-Mary plays out there.  One idea that has always appealed is that with fancy enough hardware, we could just throw some simple machine learning programs together and an AI would build itself out of a learning or evolutionary loop.  It's been tried, and it has always failed, often spectacularly, but maybe the hardware just wasn't there yet.  Or maybe a genius will hand us some key insight that opens up a path toward AI.  That's possible.  But then again, maybe not.

Long story short:  Exponential growth doesn’t get us to AI.  There are reasons to be cautiously optimistic about the possibility of real productivity-enhancing AI in our lifetimes, but there’s no simple path there, and there’s not even a really good reason to believe that we’re “on the back half of the chessboard.”
