With the announcement that Ray Kurzweil is giving the introductory keynote address at this year’s SLCC (Second Life Community Convention), there’s been some resurgence of the whole “Are Second Life and AI and nanobots and stuff going to transform the world tomorrow, or will we have to wait until next week?” meme, and related thoughts.
New World Notes calls the announcement “extraordinary and transformational”, which strikes me as way over the top. (I mean, even if you think Kurzweil’s thinking is extraordinary and transformational, it would be a weird thing to say about a single talk, much less about the mere announcement of a talk.) I replied in the comments (lightly edited):
/me grins. “extraordinary and transformational” is a tad strong, I think. He’s done some really good work in OCR, speech recog, and cool musical instruments, but he’s kinda over-the-top in the AI and virtual reality realms.
One of his most famous charts is that hysterical one showing number of neurons a computer can simulate over time, and implying that by the year whatever computers will be smarter than people. As if the hard problem in AI was getting enough transistors on a chip! (Example: a mouse is higher on his chart than the Deep Blue chess-playing computer; but how good is your typical mouse at chess?)
His ideas about virtual reality are fun, but again I think overblown. When I’m wearing these glasses and “walking around” in a completely immersive virtual world, explain to me again how I avoid tripping over my real-world chair and walking into walls? And 10 or 20 years seems like a wild underestimate for people having brains full of nanobots. The things he says are cool-sounding, but I think he’s drifted away from practical fact in various ways.
I’m sure he’ll give an engaging and thought-provoking keynote, but these days he’s really more of a showman than a technologist; it will be fun, but hardly extraordinary or transformational. The danger with Kurzweil is that he goes beyond the factual or even the plausible, makes the techies roll their eyes, and builds up unrealistic expectations in the audience that, when they are not matched in reality, could lead to a backlash of (similarly unwarranted) skepticism.
And then, in reply to some good words from Extropia DaSilva:
I think one of the things that somewhat makes me roll my eyes about Kurzweil is that he has a number of things like that chart: the most obvious message is an extremely exciting, but wrong, one (in this case, that we’ll have computers as smart as people by year nnnn), whereas if you read him carefully enough he’s actually using it to make a claim that’s more plausible, but much much less exciting (in this case, that by year nnnn we’ll have overcome one of the very minor challenges in making smart computers).
If all he’s really saying is that we’ll have solved the easy problem, why did he bother to make that chart at all? Where is his chart of progress in the software / semantic side of the problem (which would be essentially flat)?
I share your skepticism about his claimed timescales. This sentence is another example of the tendency I posit above: “we are learning to build artificial brains that are getting closer and closer to matching the power and performance of the biological version”. Taken at face value, with “closer and closer” meaning that we’re pretty close, it’s exciting but false. Taken more literally, with “closer and closer” meaning “we’ve gone from a thousand light-years away to 999.9 light-years away”, it’s true but boring.
I think Kurzweil’s right about the exciting things that people will be able to do in the future. I think he’s wrong about how much progress we’ve currently made in those directions; and that’s a big part of his message.
Really I think it’s good that they got Kurzweil to come and talk; he’ll stir things up. People don’t have to be right to be interesting, or to inspire useful discussion and even useful work. Which is good, because I don’t think Kurzweil is right. :)
Relatedly, Desmond posted about Cyc, writing that:

The great thing about this is that it would make an awesome avatar back end intelligence with very little work.
which rather disappointed me, because Desmond is usually more sensible than that. Cyc would do no such thing; at most it would help slightly with one of the many problems that we are light-years away from solving in “avatar intelligence”. Of course, if someone can prove me wrong about that with very little work, I hope they do. :)
This all reminds me of that widely-blogged demo where some folks made a program-controlled avatar (a ’bot) called “Eddie” that supposedly was able to reason at the level of a four-year-old. Looking into it more deeply, what they’d actually done was a small demo of how a program could be explicitly programmed to model one particular belief-understanding problem, in such a way that it performed about as well as a four-year-old person would. Which is probably a good piece of research and a fine use of time, but the impression that people were getting from it, something like “we can now have Second Life bots that are as intelligent as four-year-olds”, was just completely wrong.
Another recent example of this, I suspect, is that “Milo” demo from Lionhead. In this case the maker of the thing is making pretty amazing-sounding claims about it (including that what they are doing goes beyond anything in science fiction!), but I strongly suspect that the reality behind it is much more modest. (Which is to say, my “rigged-demo” detectors are pinging hard the whole time.)
(Reminds me also of that “OnLive/OTOY” demo of how advances in server-side rendering are going to give us all the ability to get to Second Life at 60 fps from our cellphones any day now. Uh-huh.)
And on the other side Second Thoughts has now spent three whole entries on how anyone who says favorable things about AI and nanotechnology and life extension and transhumanism and stuff like that is a crypto-fascist who wants to take over the world, in typical flaming-at-straw-men fashion. Not that straw men don’t make a nice fire. :)
I find that I don’t have a simple opinion about all of this stuff, myself. I think science is, overall, a good thing; figuring out how the world works and how to make it work more the way that we want it to is good. Exactly what “we” means there, just who (if anyone) should be in charge, and what should happen when what I want to do (whether enabled by science or not) conflicts with what you want to do, are all hard questions. In general I’m a left-libertarian in some sense; I think that the government should leave us alone unless we’re actually harming or defrauding someone, and that it’s nice when what we choose to do with that being-left-alone is to be nice to each other, to share things, to sit around wearing flowers in our hair and playing the guitar, and so on.
Along with that, it’s good to think about all sorts of wild stuff that some of us might want to do in the future, like modifying our bodies to be able to live in space, like developing devices that are actually intelligent, like making itty bitty machines that can swim around in our bloodstreams and keep us healthy. And as we think about doing those things, and start to even do them, the same principles apply: we each should be allowed to do what we want if it’s not hurting anybody, and it’s nice when we do it in nice cooperative ways involving guitar music.
Hm, I’ve been rambling here, what was I going to say? Oh, yeah: and while it’s fun to have some people around (Ray Kurzweil, Peter Molyneux of Lionhead, and so on) who make it sound like things are farther along than they really are (because that makes us hopeful, and stirs up debate), it’s even better to have, when we can get it, realistic estimates of what’s really going on.
Because truth is good, too.