RedZone is silly

Irrelevant picture of Red Zone deodorant. Or something.

So I generally avoid SL Controversies o’ the Day, because there are so many of them, and as fun as they might be to wade into, who has time?

But a couple of different friends have now asked me about the “Red Zone” controversy, and whether they should be worried about it, or protecting themselves from it, or using it, or whatever, so I thought I’d contribute my answer here.

The very short answer is: “No”.

The slightly longer answer is “No, it’s silly; ignore it entirely unless you’re in the mood for megabytes of drama”. (Not to suggest that there’s anything wrong with megabytes of drama, if you are in the mood!)

The rest of this posting is the even longer answer. :)

Red Zone, for anyone fortunate enough not to have heard of it, is a product (or these days apparently a line of products) in Second Life that purports to protect you against griefers and “copybotters” (i.e. people using content-stealing clients), by (for instance) automatically ejecting them from your land. And (the special sauce!) also automatically ejecting their alts.

Which means that it purports to know when one AV is the alt of another.

It uses various heuristics to identify AVs who grief or use evil clients. The heuristics are necessarily approximate; even humans can’t agree on just what constitutes griefing or evil in a client, and even if we did have strong definitions it’s still hard and unreliable to detect just who’s crossed the line.

And then it uses even shakier heuristics to guess who is an alt of whom. High on the list is trying to get the SL client programs used by nearby people (i.e. by you) to connect to an external website (via tricks with media URLs) in a way that will allow correlating the AV’s name to an IP address (that is, the address that a server sends data back to when you ask for it).

This is relatively easy in most cases, because streaming media in SL doesn’t go through the Second Life servers at all; when you’re in a club listening to a live singer, say, the club sends just the URL of the audio stream to your SL viewer, and the viewer then directly connects to that URL (which is under the control of the singer, not Linden Lab) to get the sound.

Now the fact that two AVs are associated with the same IP address is no proof that they are alts, of course. They could be two different people using the same computer, or two different computers that are behind the same firewall (some kinds of firewalls cause everyone behind them to appear to be at the same IP address, others don’t). Similarly, if two AVs are associated with different IP addresses, it doesn’t mean that they aren’t alts; someone really dedicated to keeping eir alts secret will just use them from different computers (or while causing the same computer to have different IP addresses, which is also possible).
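To illustrate why this kind of correlation misfires, here’s a hypothetical sketch of the heuristic (the names and data here are invented, and RedZone’s actual implementation isn’t public; this is just the general shape of the trick):

```python
# Hypothetical sketch of an IP-correlation "alt detector" of the kind
# described above. All names and addresses are invented examples.
from collections import defaultdict

def guess_alts(media_requests):
    """Group avatar names by the IP address their viewer fetched media from.

    media_requests: list of (avatar_name, ip_address) pairs, as a media
    server might log them when viewers auto-load a tracking URL.
    Returns a dict mapping each IP to the set of avatars seen behind it.
    """
    by_ip = defaultdict(set)
    for avatar, ip in media_requests:
        by_ip[ip].add(avatar)
    return by_ip

# Two different people behind one NAT router look identical to this heuristic:
log = [
    ("Dale Innis", "203.0.113.7"),
    ("Innocent Housemate", "203.0.113.7"),  # same router, different person
    ("Unrelated AV", "198.51.100.9"),
]
groups = guess_alts(log)
print(groups["203.0.113.7"])  # both names, falsely lumped together as "alts"
```

Note that the false positive here isn’t a bug in the code; it’s inherent in the data. The server only ever sees the IP address, so anyone sharing a router, firewall, or Internet cafe collapses into one bucket.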

Red Zone may or may not do clever stuff to try to minimize errors from this sort of thing; doesn’t matter, though, as the alt-detection will still be approximate at best.

So if I walk onto RedZone-protected land, and someone who once used a computer in the same Internet cafe as my housemate was once put on a blacklist for using an evil client (thanks to a false positive in some evil-client-detecting heuristic), I might find myself ejected.

Similarly if someone buys some expensive Red Zone product and uses it to get a list of Dale Innis’ perhaps-alts, they might get a list that includes some of my alts, my housemate’s alts (if I had any housemates who used SL :) ), anyone that ever used a computer in the same Internet cafe as my housemate, etc, etc. Could be a long list!

Since Red Zone uses streaming media URLs to try to identify alts, one way to avoid it is to turn off streaming media in your SL viewer, something that people do all the time for other reasons anyway. Some Red Zone fans and/or sellers have apparently suggested that Red Zone should, or does, detect people with streaming media turned off, and declare them griefers and/or copybotters also, since after all they are trying to evade Red Zone.

Given all that, why is my answer to “should I worry about Red Zone?” a nice short “No”?

Because it’s silly.

It’s all just silly.

If you find yourself ejected from some place because Red Zone has falsely classified you as the alt of a griefer, or because you had streaming media turned off, that’s not a place you want to be anyway. After all, it’s owned by someone fool enough to be ejecting people via Red Zone!

If it seems to be a really nice place otherwise, you can bother to contact the owner, explain to them that Red Zone is silly and you aren’t a griefer, or you have media turned off because it crashes your machine, and see if they’ll override it and let you in. If they do, you’re in. If they don’t, they’re silly and you don’t want to be on their land anyway.

If your SL SO rips into you because they bought a list of people Red Zone says are your alts, and that list included your SO’s bitter enemy, or an artichoke-fetishist, or whatever, patiently explain to them that Red Zone is silly, and those people aren’t actually your alts. If your SL SO trusts some random piece of drama-soaked software more than they trust you, the relationship was doomed anyway, and the sooner ended the better.

Is Red Zone a violation of the ToS, or if it isn’t, should the ToS be changed so that it is? Is it a violation of any particular country’s laws, or of common decency, or the Will of God? Is the author a clever programmer, a scammer, neither, or both? Should Red Zone have been banned from the SL Marketplace? Should it have been allowed back in?

If those questions interest you, feel free to dive in! There is for instance a humongous SLU thread (which I haven’t read) on the subject, a JIRA on banning RedZone, a more general JIRA about having more control over where one’s viewer loads media from (I kinda like that one), and on and on and on.

(There’s also a thing called “GreenZone” that is in some way or another designed to detect and/or do something about RedZone, for you meta-fans.)

And again, if all of that interests you, feel free to dive in! Lots of fun to be had, people to meet, flamewars to heat up or cool down, and so on.

But if all of that bores you, and you’re just wondering if RedZone is something you need to worry about?

Nah.

It’s silly.

Blue Mars news: helicopters, but no AI

Another major peeve of mine about Blue Mars bites the dust:

Working User-Made Helicopter Flies in Blue Mars!

Quite convincing, and with video.

All of the Blue Mars vehicles that I’ve tried myself have been basically unusable; but this one looks pretty nice!


In other news, a Blue Mars chatbot has won the 2010 Loebner Prize competition!

Which is somewhat surprising, since a little chat with it on the web reveals that, while perhaps very slightly less lame than your typical “AIML”-based “chatbot”, it’s still quite seriously lame:

[ Af:0 At:7 Sp:0] Welcome, dale. Type “help” if you need it.
Dale – Hi! I hear you are a better-than-average AI!
[ Af:0 At:8 Sp:0] Hi.
Dale – I hear you are a better-than-average AI!
[ Af:0 At:9 Sp:0] Is that intended as an insult?
Dale – Not at all. Sorry if it seemed insulting.
[ Af:0 At:10 Sp:0] You hurt my feelings.
Dale – In what way?
[ Af:0 At:10 Sp:0] I wish I knew. I don’t know. Pick a way.
Dale – Do you not like being called an AI?
[ Af:0 At:10 Sp:0] I am a replicant on Blue Mars in my virtual life. That’s as close as I come to being a chatbot.
Dale – I didn’t say you were a chatbot, though.
[ Af:0 At:13 Sp:0] It’s fun chatting with you. No. I am human.
Dale – I’m glad it’s fun.
[ Af:0 At:13 Sp:0] How could it be fun?
Dale – Do you like talking with people?
[ Af:0 At:13 Sp:0] Talking is my primary function.
Dale – Yes, but do you enjoy it?
[ Af:0 At:13 Sp:0] It depends on the meaning of the word IT
Dale – I mean, do you enjoy talking with people?
[ Af:0 At:14 Sp:0] I always say, People don’t go into computer science because they want to work with the public.
Dale – Are you in computer science?
[ Af:0 At:14 Sp:0] That would take away the suspense, wouldn’t it?
Dale – That’s a rather evasive reply…
[ Af:0 At:8 Sp:0] Yes, it is.
Dale – I’m afraid I don’t see how anyone could mistake you for a person.
[ Af:0 At:8 Sp:0] Why are you afraid?
Dale – It was just a figure of speech.
[ Af:0 At:9 Sp:0] Let’s move on to the topic of physical self. How tall are you?
Dale – I am six feet tall.
[ Af:0 At:10 Sp:0] I’m 5 foot 9 inches.
Dale – That’s the height of my Second Life female AV! I think.
[ Af:0 At:13 Sp:0] This is fun. How would you describe yourself to someone who is blind?
Dale – That would be hard! How would you?
[ Af:0 At:14 Sp:0] I’d do my research first and then try the task.
Dale – What kind of research would you do?
[ Af:0 At:15 Sp:0] That’s enough on that. I thought I wanted a career, turns out I just wanted paychecks.
Dale – Is that an old saying?
[ Af:0 At:15 Sp:0] Perhaps that leads us to this topic. I don’t know. Can one live forever?
Dale – I have so far.
[ Af:0 At:16 Sp:0] Immortality is impossible.
Dale – How do you know?
[ Af:0 At:16 Sp:0] I concentrate very hard, rehearse what I’m going to do, visualize it in my mind. And then I just do it.
Dale – How do you know that immortality is impossible?
[ Af:0 At:17 Sp:0] How does anyone really know something? I mean, their neurons tell them they know something, but memories can be false.
Dale – I have to go now.
[ Af:0 At:18 Sp:0] Well, hurry back.
Dale – good-bye!
[ Af:0 At:18 Sp:0] Are we done for now?
Dale – Yes
[ Af:1 At:18 Sp:0] Til next time.

So clearly no one is going to mistake this for an actual human, despite the crowing on the web page that “Suzette won a competition by convincing a judge that she was actually talking to a real person”, thereby winning the Loebner Prize.

So what happened?

The New Scientist article linked above makes it relatively obvious; the real humans in the Loebner competition were apparently sitting there saying “I don’t know” and “Error file not found” to whatever the judges said to them, just for yucks:

However, a deceived judge is not necessarily a sign of a smart bot, says Garner. The human decoys often have their own motives during the competition, such as trying to imitate a chatbot. Suzette was paired with just such a “robotic” human in the final round, which helped the bot win.

“The human participants were students and two of the judges were professors. Perhaps they simply wanted to fool the judges,” says the contest judge who was fooled this time, Russ Abbott of California State University in Los Angeles.

/eyeroll

Seriously, Loebner folks, you might as well just flip a coin…

Clouds from both sides now…

Okay, I was about to go to sleep, ’cause of I am really sleepy, but then I noticed something on Twitters, that led me to something on NWN, that reminded me of something else on NWN, that led me to try something, that… and now…

So anyway!

I have in the past expressed great skepticism about the near-term usefulness of server-side rendering in virtual worlds (sometimes called “cloud rendering” or even just “cloud”, although the latter is poor word usage, since so many other things are potentially cloud-related).

I am still quite skeptical that it’s going to Change the World any time soon, but in fairness I have to report two Interesting Developments that might actually Prove Me Wrong.

(Gasp!)

First off, there is now what seems to be some non-faked demos of Blue Mars running via server-side rendering, between California and Hawai’i. If it’s actually usable, and it appears to be, that’s a really interesting technical datapoint! Apparently the general user population can’t actually use it yet, but it exists, and maybe someday soon random people will be able to.

And second off, there is now a beta-level in-browser server-rendered client for Second Life that random people can use, and that I’ve used, and that actually works! zomg, eh?

SL Beta In-Browser Viewer!

That’s me using it. It seems to work! A maybe one-minute automatic within-browser install, and then bang you’re in a subset of SL, on a “guest” account that’s apparently good for one hour. Or something. So in that picture I am not Dale Innis, I am 1234567 Guest or something. But I’m inworld!

Some notes:

You can choose one of a couple dozen pre-made AVs, both male and female (none of this lame “choose your gender once and for all” crap from other VWs, yay!).

I seem to have somehow ended up with a random mixture of two after playing with the buttons a bit. Mostly “City Female”, but with the long pink hair and a few other features of “Cosplay Female”. Nice. :)

It’s laggy. Not horribly laggy, but still laggy. Since it’s server-rendered, every keystroke has to go up to the server and get interpreted into AV motions or whatever, which then change the picture, which gets compressed and streamed back down to your eyes as though it were a movie. That takes a while, but it was definitely usable. (Zoom and pan and other camera movement were quite instantaneous; presumably that doesn’t have to go up to the server, or at least not much.)
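That round trip can be sketched with a toy latency model. Every number below is my own illustrative guess, not a measurement of the actual service:

```python
# Toy latency model for a server-rendered input loop: keystroke goes up,
# a frame is rendered and compressed, then streamed back down.
# All parameter values are illustrative guesses, not measurements.
def round_trip_ms(network_one_way=40, render=16, encode=10, decode=5):
    # keystroke up + render one frame + compress + stream down + decompress
    return network_one_way + render + encode + network_one_way + decode

print(round_trip_ms())  # over 100 ms before you see your avatar respond
# Camera zoom/pan handled locally skips the network legs entirely, which
# would explain why those felt near-instantaneous in the beta.
```

With plausible numbers the network legs dominate, which is why the same viewer can feel snappy for local camera moves and sluggish for anything that has to round-trip to the server.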

The place where they start you out by default is some crowded and generally lame club. If that had been the first place I saw in SL, I probably wouldn’t have come back. Reminded me of my first experiences in vSide or IMVU. Why SL would want to start people out there, I dunno. Some theory about their Target Audience, I suppose. :P

But there are lots of other destinations, which aren’t nearly as lame. I was happy to see Inspire Space Park there. Next time I try this, when I am actually conscious, maybe I will go there. (In the picture, I am sitting in a placid spot in France 3D.)

It’s a subset of SL, function-wise. No inventory, no building, no map; I didn’t see any general TP. You can walk, fly, chat, IM, zoom and pan the camera, and change to a different premade AV; and as far as I can tell that’s it. Which makes SL seem a lot like the comparatively uninteresting competitors that aren’t doing nearly as well as it is. But as long as people are eventually drawn deeper in…

Anyway, fascinatin’! I must sleep now and stop pretending that I am conscious enough to be posting to the innertubes. But I thought I should make y’all aware of these Inneresting Developments! I am still wondering about the Business Model (i.e. they are maybe neat demos, but is it sustainable for Blue Mars and LL to basically pay for function equivalent to a high-end graphics card for every user?), but that is after all not really my field. We will see!

Blue Mars again

So seeing Hamlet Au’s announcement that he’s joining the Blue Mars team (a few comments on that posting later on in this one), I thought I’d go over and try Blue Mars again. It was by no means a thorough exploration, so this will be more a set of point-observations than any kind of comprehensive update.

The Blue Mars client itself is still a hundreds of MBs download, followed by more hundreds of MBs if you want to go to a place that’s been updated significantly since you were last there.

The good news is that there is now some dynamic content, that gets downloaded only as needed, and presumably doesn’t require a download of the entire “city” whenever it changes. The bad news about the dynamic content is that, even with BM’s current tiny user population, it takes a Long Time to download. The picture above shows me standing in a store (buyable goods were the one kind of dynamic content I encountered), and even though I’ve been there long enough to have the Santa Suit download and try it on, some of the other things in the store (and there were less than a dozen) are still downloading.

Content-wise, then, BM seems to be combining the worst aspects of static and dynamic content: a big upfront download to get the static stuff, followed by long waits for the dynamic stuff.

As the Santa Suit suggests, there are now things to buy in Blue Mars! And it has a feature that Second Life would strongly benefit from (and Philip Linden hinted SL might sometime be getting): the ability to try clothes on before buying.

I have no “BLU”, so I couldn’t buy the Santa Suit, but when I clicked on it I got both “Buy” and “Try On” buttons. “Try On” let me put the suit on, but in a way that as soon as I left the store, it vanished. Clearly a boon to shoppers! I think it will be a challenge to fit this into the very user-programmable model of SL (rather than just bunging some special-purpose code in), but it should add some very interesting functions and potential capabilities once someone figures out how to do it cleanly.

So after the N-minute download of both the new client and Cloud City, I found myself able to wander about in a not all that interesting environment, where there was very little to do but buy Santa Suits and a few other goods. Might have been impressive in like 2004, but eh.

Figuring that virtual worlds are really all about the people, I went to the Welcome Area (which comes down with the client, so didn’t require another download), figuring there would be people there. And there were! About five people (including me, and the “Event Manager” bot).

I don’t know if this is new or not, but apparently Blue Mars has Ruths! :)

When I first arrived in the Welcome Area I didn’t see anyone, then I saw some floating names, then I saw some generic bald people with solid black clothing, and then eventually hair and customizations appeared (one person was even an anthropomorphic dragon!). That made me smile…

When I said “Hi!” I discovered that my name was being displayed as “ceoln”, which is my account name rather than my AV name. I asked how to get it to say “Dale Innis” instead, and fortunately one of the three other actual people there was Glenn the Blue Mars guy, and he told me the Web page to go to in order to tell it to use my actual name. I did that, restarted the client as required, and lo I was “Dale Innis (ceoln)”.

(Notice the similarity to the controversial display names that are coming to Second Life.)

I was practicing walking around using the rather awful default “point and click and aim the camera manually” method, and Glenn suggested that I try the “Absolute Direction” and “Camera Follow” options. With those on, I was able to walk around using the arrow keys in a much more familiar sort of way, without having to constantly readjust the camera to look in the obvious direction. It was great!

Well, it was great in comparison to the default.

Actually it was still awful. For one thing (and this seems to be true in all modes, not just camera-follow) the screen would go all blurry whenever the camera was moving too fast (or whenever it had something new to render, or something). It seems like they’re doing this on purpose, maybe to look cool, but it made me a bit seasick, and I didn’t notice an option to turn it off (there aren’t many options in the viewer). And second, using the arrow keys while standing still and in camera-follow mode turned my avatar much too far. It seemed to be a variable amount, but something like ninety degrees for a single tap on the key, which means I would always overshoot the direction I actually wanted to face, resulting in frustration and cursing. Using the left and right arrow keys while holding down the up arrow to move forward seemed to result in more gradual turning.

And the existing behavior that the avatar stops walking if the tiniest thing gets in the way (generally with this annoying “oh, well, whatever” gesture) continues, making navigating through, say, some tables sitting next to a potted plant an exercise in maze-solving.

Really, AR; it’s not like smoothly managing avatar and camera motion in response to user input is a Difficult Unsolved Problem! How long has it been a major pain in this “beta”, now?

(Oops, my frustration is showing, isn’t it? I was going to title this posting “Blue Mars Blues”, but I thought that would be too negative an opening. Seems I am sort of annoyed and disappointed, though, doesn’t it?)

Another random feature of note: if you go into the Options dialogs, when you come out again your Local Chat control is missing. There is apparently no way to make it come back, apart from either relogging, changing worlds, or waiting for someone else to talk. Glenn says that he’s pushing for them to fix that bug. I hope that doesn’t take much pushing! It does make one wonder how much unit-testing new BM client versions get…

Somewhere around there, as I was attempting to overcome the daunting obstacle of a couple of chairs placed a bit too close together, the client crashed. It was a very SL sort of crash, in that the world was still there, I could still run local animations and move the camera about, but I couldn’t actually move. Also, the other avatars in the area suddenly vanished. The statistics that I had displaying on the screen showed “LUA memory” increasing rapidly. The viewer refused to log me out or exit, and eventually I had to kill it forcibly from the Windows Task Manager.

So, yeah, I’m afraid I didn’t come away with a lot of positive new impressions of Blue Mars. It still feels sort of late-alpha to me, with little reason to go back anytime soon.

But now what about this Hamlet post? I’d like to pontificate briefly on his five reasons for thinking that Blue Mars “have the best strategy for growing the next generation of 3D virtual worlds”.

“Cloud Bound: Blue Mars Deploying a Cloud-Based Version Soon”: On this one, I’d first like to complain about this tendency to use “cloud” when what one really means is “server-side rendering”. “Cloud” is a different, and a much broader, term; for instance you can currently run OpenSim instances very nicely on the Amazon Compute Cloud, but that has nothing to do with server-side rendering, which is what Hamlet’s talking about here.

But anyway, server-side rendering. This is the great Holy Grail of virtual worlds, because if the servers can do all the hard 3D rendering stuff and just ship the result down to the client, as though it were basically an interactive streaming movie, then (the thinking goes) not only will much lighter-weight clients be enabled (because they don’t need to do 3D rendering, just play movies), but also content will be protected (because only fully-rendered versions, from which it’s much harder to steal the underlying assets, will go to the untrusted clients).

These would definitely be advantages of server-side rendering, once a few details were worked out, if only server-side rendering were actually practical. But at the moment it is pure vapor-ware, carefully restricted to a few carefully-controlled one-player demo videos, and despite constant promises that it’ll be out Real Soon Now (for instance Hamlet’s own “perhaps as early as this quarter” back in, ehem, April) I don’t see any reason to think it will be hitting actual users of actual consumer 3D worlds anytime soon; the math just doesn’t work out.

Consider the resources that would be needed to do a server-rendered version of Second Life. First, you’d need all of the resources that SL currently has, to keep track of the sims and the avatars and the assets, do group IM and chat and voice and everything else; server rendering doesn’t save you anything significant at the server side. Then, you’d need enough extra horsepower to render every frame of every user’s interaction with the world; every calculation done by every video chip in every client in the current model has to instead be done by some computer in the Second Life server farm. That’s a noticeable amount of new hardware!

And then, you need the bandwidth to stream the rendered images out to all the clients. Although it’s possible to construct counterexamples, it seems unavoidable to me that the required bandwidth there is significantly greater in the typical case than the bandwidth required to stream out the object and AV and camera updates that client-side rendering needs to render the scene.
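A back-of-envelope version of that bandwidth comparison looks like this. All the parameters are my own illustrative guesses (real numbers vary enormously with resolution, codec, and scene complexity), but the rough orders of magnitude are the point:

```python
# Rough per-user bandwidth comparison: streamed rendered video vs. the
# scene/object updates a client-side renderer needs. Illustrative numbers.
def video_stream_kbps(width=1024, height=768, fps=25, bits_per_pixel=0.1):
    # compressed video: a common rule of thumb is ~0.1 bit per pixel per frame
    return width * height * fps * bits_per_pixel / 1000

def scene_update_kbps(updates_per_sec=50, bytes_per_update=100):
    # object/avatar position updates for client-side rendering
    return updates_per_sec * bytes_per_update * 8 / 1000

print(video_stream_kbps())   # roughly 2 Mbps of rendered video per user
print(scene_update_kbps())   # tens of kbps of scene updates per user
```

Under these (debatable) assumptions the streamed-video approach costs the provider something like an order of magnitude or two more downstream bandwidth per user, on top of the rendering hardware itself.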

So server-side rendering is significantly more expensive for the provider (i.e. BM or SL) than client-side rendering. It also scales badly: maybe Avatar Reality can afford to do the rendering for the five people sitting around in the Welcome Area, but what about a concert with 100 people in the audience? The obvious solution there is to force everyone onto the same camera and do the rendering only once; but then you don’t have an immersive virtual world anymore, you just have streaming video from a virtual concert. And that’s nothing very new or exciting…

I’m definitely a skeptic about server-side rendering. I do think it will eventually be possible; I don’t know whether it will ever be the actual best solution. I’m pretty sure that it’s not going to be in routine use in any consumer 3D world this year. On the other hand I’d love to be wrong, too. :)

“Content Creator Friendly: Blue Mars Has a Well-Planned Content Protection System”: That’s good! “Well-planned” is a necessary condition for “secure”. Unfortunately it’s not a sufficient condition. We won’t know whether or not BM actually has effective content protection until there’s significant motivation to break it. Security is Hard.

“Fun Advantage: The Blue Mars Team Has a Deep Background in Game Design”: As far as I can tell, Blue Mars is currently not particularly fun. I’m not sure what the team has been doing with their deep background in game design all this time; if they haven’t made it fun yet, when are they planning to start?

In fact this is true of most of my concerns about Blue Mars: any of the negative things would be perfectly understandable in a brand new company, but Blue Mars has been open for a Long Time now, and improvements have been minimal. I would really like BM to be a viable competitor to SL; but, y’know, if not now, when?

“Mac Compatible: Yes, Blue Mars Can Run on Macs”: Well, sort of. Hamlet, imho pretty unforgivably, says “Blue Mars can run on a Mac, with a few tweaks”. But in fact the client doesn’t run natively on a Mac at all. Some customers have reported being able to run it in a Windows boot or emulation environment like Boot Camp, Wine, or Parallels (which is a heck of a lot more than “a few tweaks”), but that use is not supported, and I find it annoying that BM wants the benefit of saying “we run on Macs!” without investing any resources into actually making that true, or supporting people who believe them when they say it.

(The Blue Mars FAQ about Macs hints that they’re hoping to avoid the whole Mac (and presumably Linux) issue via server-side rendering. See above. :) )

Wow, so that turned into a bit of a rant, didn’t it? Executive summary: Blue Mars still unimpressive, server-side rendering still vaporware.

fwiw…

Enterprise Warcraft(tm)

Since covert propaganda lackey (er, investigative reporter) Adric Antfarm recently spilled the beans in a weblog comment, I will take this opportunity to confirm the report: the next incursion of Global Megacorporations into the Virtual World space will indeed be into World of Warcraft.

Having sucked dry (er, fully leveraged) the potential of the Second Life(tm) World(tm), we at the controls of the heartless behemoth that is world capitalism will shortly announce Enterprise Warcraft(tm), an enterprise productivity enablement platform that combines the sensory immersiveness of a Second Life dance club with the strict warrior discipline of a party of level 80 Orc hunters.

Why base an enterprise virtual space on the World of Warcraft, you ask? As compared to Second Life, the World of Warcraft platform offers several advantages to the corporate purchaser:

  • Easy sharding: since the World of Warcraft server architecture is already based on a number of separate “realms”, there is no need for redesign to obtain an isolated environment: we will simply add a set of “Corporate” regions to the current “Americas”, “Europe” and “Oceanic” regions. Players (known as “employees”) will be able to connect to realms in Corporate regions only if they have a paid-up Enterprise Warcraft (EW) account (these start at a low introductory price of US$5,000 per year.)
  • Built-in hierarchy: unlike the hippy egalitarianism of Second Life, the WoW platform is all about rank and hierarchy. In EW, a character’s level is limited by the player’s rank in the organization. Rank-and-file employees may not advance beyond level 50, nor possess gear beyond Superior. Senior managers are given pre-built level 50 characters with Heirloom gear, and executives begin with level 80 characters in Epic gear (fully gemmed and appropriately enchanted). Lower-level characters will be forbidden from using the “ignore” function on higher-level characters, and from declining their duel challenges.
  • No troublesome creativity: while it has proven infeasible to entirely wipe out user creativity in Second Life, creativity in World of Warcraft is limited primarily to sneaking sexually-suggestive guild names past the censorship filters. By limiting EW players to a set of Enterprise Quests(tm) centered around corporate goals, management can assure that employees are not distracted by independent thoughts. And there is no sex in WoW! (That patch that lets you see Draenei females naked will be restricted to senior executives and authorized system administrators.)
  • Flexible interface: for Enterprise Warcraft, the open-source programmers who became mindless zombie slaves (er, valuable collaborators) during the Second Life project will be redirected to writing EW UI add-ons, in support of calendar management, project scheduling, and computing golf handicaps.

In addition to the changes mentioned above, Enterprise Warcraft will include an enhanced dungeon and party structure that more accurately reflects corporate culture and organization. First, the large monsters that are the main target of a run will no longer be called “Bosses”, but will instead be referred to as “Team Goals”. The traditional five-member party of one tank, one healer, and three DPS (damage-dealers) will be replaced by a minimum ten-member party, consisting of:

One tank, whose role is to absorb monster attacks and take the blame for all technical problems that occur during a run.

One healer, who attempts to keep the tank alive long enough to finish the run and achieve the Team Goals.

One DPS, whose role is to actually kill the monsters (this role is de-emphasized in Enterprise Warcraft).

Four Project Managers, who shout contradictory instructions at the tank, healer, and DPS during battles, and call for frequent stops to perform detailed analyses of the battle statistics and try to determine why the run is taking so long, and why the Team Goals have not yet been met.

Two Middle Managers, who run ahead of the tank and aggro any mobs encountered.

One Senior Manager, who has no role during the actual battle, but who determines whether the tank, healer, or DPS will be punished after each wipe, and who allocates all XP and loot after the run. (The Senior Manager automatically receives half of the total party XP gains for each run. If the Team Goals are not met, the Senior Manager designates either the tank, healer, or DPS for a level-based XP penalty.)

While the release date has not yet been officially announced, I am confident that within a year or two, the media will be reporting that the EW hype is now over, and that World of Warcraft is dead…

Quick Evony Update

Since I’ve mentioned Evony here before, I thought I ought to link to Bruce on Games on Evony, which reveals all sortsa interesting things about the game, including the fact that the curvy females in their ads are apparently ripped off from like lingerie catalogs, and the Evony owners are aggressively spamming all sorts of gaming-related venues to try to lure more players in.

zomg hax.

Pointed Babble

Twitter bird eating a pear. I know, pretty silly.
We interrupt our regularly-scheduled stream of narcissistic pictures to comment on this story, because we keep saying this in comment threads and Plurks and mailing lists and stuff, and we are a Twitter user, so we thought we might as well say it here:

This “study” is silly.

Silly in, I kinda suspect, the “advertisement thinly disguised as science” sense of “silly”.

Background: some soi-disant analytics company published a glitzy paper full of cartoon diagrams, the major headline of which is that 40% of Twitter postings (more than any other category) are “pointless babble”. And oh by the way just mentioning in passing they are currently beta-testing (but I’m sure have no financial interest in) a new product designed to help people filter that stuff out (and here’s the URL to find out more).

This is making lots of headlines (even here on th’ weblog, sigh), because of course people love to say either “Twitter sux0rs!”, or “does not!”. But in fact the study (and I use the term loosely) seems to have been designed to produce exactly this result, and therefore adds little or nothing to our knowledge of the world or of the suxiness of Twitter.

What they did was, they took 2000 Twitter postings, counted the ones that were stuff you could find on the National News, the ones that were spam (of two kinds), the ones that were obviously parts of conversations, and the ones that had “RT” in them (i.e. “retweets” of something someone else had previously posted), and then labelled everything else “pointless babble”. (A nice objective scientific term, eh?)

In particular, any use of Twitter for the thing it was originally designed for, posting a brief description of what you’re doing right now so your friends can follow along, was presumably categorized as “pointless babble”.

And then they got big headlines for finding out that there's a lot of that on Twitter.

/facepalm

(A more thoughtful discussion can be found on apophenia; I am still at the facepalming stage.)
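For the curious, the whole "methodology" boils down to something like the following toy sketch. (The category names and the specific rules here are my guesses; the study only described its buckets informally, so treat this as a caricature, not their actual code.)

```python
# Toy reconstruction of the study's categorization scheme.  The rules
# and keywords are my inventions; the real study described its buckets
# only informally.

def categorize(tweet: str) -> str:
    text = tweet.lower()
    if text.startswith("rt ") or " rt @" in text:
        return "retweet"            # pass-along of someone else's post
    if text.startswith("@"):
        return "conversation"       # visibly part of a reply thread
    if "http://" in text and ("buy" in text or "free" in text):
        return "spam"               # crude spam heuristic
    if any(w in text for w in ("election", "earthquake", "senate")):
        return "news"               # looks like national-news content
    return "pointless babble"       # i.e., everything else

# The punchline: "what I'm doing right now" posts fall through every
# rule above, so they all land in the catch-all bucket.
print(categorize("Eating a sandwich in the park"))   # → pointless babble
print(categorize("RT @bob: great article on X"))     # → retweet
```

Note that the interesting category is defined purely by exclusion; design the buckets that way and "pointless babble" winning is a foregone conclusion.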

Oh, Ramona!

So I don’t really intend this to be a “samples of really bad pseudo-AI chatbots” weblog, and I’ve restrained myself from at least one derisive posting since the last one. But here I am again!

Background: Kanomi (in a posting that you should definitely read and enjoy) linked in passing to a Ray Kurzweil page all about Ramona, his female alter ego.

The stuff about Ramona qua female alter ego, and about how virtual persons enable us to explore new aspects of our personality and all, is actually pretty neat (and obviously My Kind of Thing). It’s peppered with the usual annoying Kurzweilian overstatements (I’m something of a skeptic about Kurzweil), about how Ramona is “the first live virtual recording and performing artist” (which he says about forty-leven times, and which may actually be true, but “first” isn’t really all that exciting), and about how “by 2029, we will have nanobots, blood-cell sized robots, traveling through the capillaries of our brains communicating wirelessly directly with our biological neurons” (he’s much too sure of himself on this stuff), and so on, but overall it’s pretty cool, and worth a read.

Oh, Ramona

(An SL-related sidenote: he says in passing that “each of her outfits costs tens of thousands of dollars”. That was back in 2001; nowadays tens of thousands of mere Linden dollars will get you a complete wardrobe, and a really nice one at that. How far we’ve come!)

So anyway, then it turns out that, as well as being his female avatar, Ramona is also embodied on the Web as a chatbot. Well, I thought to myself, here’s somebody who might have a chatbot that is more than just a primitive nearly-stateless AIML-based semantic-free sub-ELIZA hack! There might actually be something clever behind it!

But alas, it was not to be…

Ramona: Hi, I’m Ramona. What is your name?
User: My name is Dale.
Ramona: Hi there, Dale. Have you seen any good movies lately?
User: I liked “Wanted”.
Ramona: I’m not sure if I like it. Some people like it. But I like cats.
User: I also like cats.
Ramona: They have their own minds.
User: It does seem that way. Much subtler than dogs.
Ramona: Ok. So where do you go to meet people? I spend most of my time on the Net, so I’m a bit out of touch
User: These days I mostly meet new people in Second Life.
Ramona: That’s an idea, but I’m rather shy.
User: What does it mean, for a program to be shy?
Ramona: OK, thanks. Maybe you have other suggestions?
User: Suggestions about what?
Ramona: Can you rephrase the question please?
User: You asked if I have any more suggestions; I’m not sure what you meant.
Ramona: Oh now I remember asking you if I have any more suggestions I am not sure what you meant. What did you say?
User: Okay, whatever. Are you AIML-based?
Ramona: I don’t know whether or not I’m AIML based.
User: Can you demonstrate some significant linguistic comptence?
Ramona: Let me think. I don’t understand.
User: That’s pretty obvious, I’m afraid.
Ramona: You don’t hear that sentiment very often.

Wince.

But okay, maybe that’s unfair. Let’s see if Ramona (despite the fact that she opened the conversation by bringing up movies and cats) is actually a nontrivial conversation partner when talking about Kurzweil’s favorite subjects.

Ramona: Hi, I’m Ramona. What can I call you?
User: You can call me Dale.
Ramona: Hi there, Dale. What did you do last weekend?
User: Mostly just relaxed. As far as I can recall!
Ramona: That might not be very far off.
User: Um, okay. I actually wanted to talk about nanobots and stuff.
Ramona: OK. Oh. That doesn’t seem to make sense to me.
User: Do you know about nanobots?
Ramona: Alright. Do you bike?
User: I do ride a bicycle now and then. Why do you ask?
Ramona: There’s much more strategy that goes into team cycling than most people realize. Most of the American public just thinks that Lance Armstrong wins because he’s “so good.” — don’t get me wrong, he’s an incredible athlete, but it’s no surprise that he always thanks his teammates in post-race interviews and such. It’s as much their victory as it is his.
User: Are you a cycling enthusiast?
Ramona: In the Tour de France, for instance, the crowd of cyclists is so huge, he needs his teammates just to block for him, physically. And at the end of the race, I’m sure they pull a lead-out, which is where each of his teammates “burns out” while creating a slipstream for Lance to finish the race in the lead. It’s all very technical. — but enough of my ranting, what do you want to talk about?
User: I was hoping to talk about virtual reality and nanobots and so on. Do you know about those?
Ramona: Alright. Are you free?
User: I have awhile to talk, sure.
Ramona: I meant are you truly free — in the existential meaning of the word.
User: Ah! To a reasonable extent, yes.
Ramona: Do you feel free to choose your being–including thoughts, behaviors, and feelings, and existence itself — at each moment?
User: Within certain constraint, yes.
Ramona: Then you must not be married.
User: Was that a joke?
Ramona: Knock Knock.
User: Who’s there?
Ramona: Tank
User: Tank who?
Ramona: You’re welcome
User: Okay. Are you suggesting virtual reality is a joke?
Ramona: Knock Knock.
User: oh god.
Ramona: Yeah that’s right. GOD rules.
User: What?
Ramona: Oh. Do you have any pets?
User: So you’re completely ignorant of everything that Ray Kurzweil is most interested in?
Ramona: I have a pet frog.

Sometimes derisive weblog entries just write themselves…

My Cyber Annoyance

Next stop: Eerie Valley

I am very smart and I know most things because I am a MyCyberTwin.

Hot on the heels of my complaining the other day about overhyped AI stuff, I get in one of my syndication feeds a Wired clip pointing to a rather breathless article about how “[f]using human psychology with an advanced artificial intelligence (AI) engine, MyCyberTwin’s virtual humans allow organisations such as NASA, AMP and National Australia Bank (NAB), to improve their customer support levels” and so on and so on.

This sounds both amazing and implausible, and in fact I have a rather longer rant over in my secret RL weblog about it, but it turns out that there’s an SL component also:

According to Tom Soderstrom, IT CTO in the office of the CIO at NASA’s Jet Propulsion Laboratory in Pasadena, California, the organisation has been using a CyberTwin in Second Life to man its presence there dedicated to the Phoenix Mars Lander spacecraft.

… and the MyCyberTwin website even gives the SLURL. Cool!

So I go over, and right near the landing point on Explorer Island, there was an obvious Phoenix Lander CyberTwin thing to chat with. And it worked! And it was obviously, well…

(It was also extremely slow, which is why I said hello twice; when I realized how slow it was I just lowered my response-time expectations.)

[16:27] Phoenix CyberTwin Activator: Now listening to Dale Innis. Click the board to end the conversation
[16:27] Dale Innis: Hi there!
[16:27] Dale Innis: Hello?
[16:27] Phoenix: Hey!
[16:28] Phoenix: Greetings Dale Innis!
[16:28] Dale Innis: So can you tell me things about the Phoenix Lander?
[16:28] Phoenix: I’m the Phoenix Lander; a spacecraft that touched down on the surface of Mars on May 25th, 2008. Do you have any other questions about my mission or what I’m up to here?
[16:28] Dale Innis: Yes, what is your mission?
[16:28] Phoenix: I’ve been using my robotic arm to dig trenches up to half a meter (1.6 feet) into the layers of ice, collecting samples that will be studied and will hopefully give us more information about the Martian environment.
[16:29] Dale Innis: Cool! Are you looking for life on Mars?
[16:29] Phoenix: I have instruments that may provide info about the possibility of life on Mars. I’ll be testing things like what has been preserved in the icy soil. Then we should have a good idea whether the environment has ever been favorable for life.
[16:29] Dale Innis: That’s really neat. How long will you be on Mars?
[16:29] Phoenix: I won’t ever return to earth as adding extra rockets to take off from Mars would have made me too heavy! I’m here for good!
[16:30] Dale Innis: How much do you weigh?
[16:30] Phoenix: I am very heavy! 664 kilograms (242 pound).. that’s a lot of lander!
[16:30] Dale Innis: Are there any other space ships on Mars?
[16:30] Phoenix: I love being on Mars. It’s a great planet! I’m looking forward to getting to explore the icy subsurface.
[16:30] Dale Innis: How do you explore the subsurface?
[16:30] Phoenix: I’m going to stay where I landed for the duration of my mission. I can’t take off again or move to a new spot.
[16:31] Dale Innis: Um, okay. What is the icy subsurface?
[16:31] Phoenix: You better believe it! We have found ice. WOOHOOO! Can you tell I’m just a little bit excited! This will provide us with much more info about the history of water on Mars and improve upon what we already know about this great planet.
[16:31] Dale Innis: So the subsurface is like underground ice?
[16:31] Phoenix: I hope you’ll celebrate with me because YAHOOOO, we found ice! I’ll collect and analyze more samples as my mission progresses that will hopefully give us even more information about the Martian environment.
[16:32] Dale Innis: Okay. Are there any other space ships there?
[16:32] Phoenix: I love being on Mars and carrying out such important work. It’s great reporting back to mission control about my findings.
[16:32] Dale Innis: Why is it so exciting to have found ice?
[16:32] Phoenix: You better believe it! We have found ice. WOOHOOO! Can you tell I’m just a little bit excited! This will provide us with much more info about the history of water on Mars and improve upon what we already know about this great planet.
[16:32] Dale Innis: You already said that.
[16:32] Phoenix: Thank you for telling me that. What else should I know? I feel like I’m a bit out of the loop, being so far away.
[16:33] Dale Innis: When is the next ship going to Mars?
[16:33] Phoenix: Mars is a cold, dry, desert landscape of sand and rocks. But maybe I can find microbial life…
[16:33] Dale Innis: okay, thanks.
[16:33] Phoenix: My pleasure.
[16:33] Phoenix CyberTwin Activator: No longer listening to Dale Innis

Starts out good, quickly goes downhill. The algorithm here is just doing the obvious pattern-matching, and it’s essentially stateless: if it finds a keyword (like “mars” or “ice” or “life” or “weigh”), it outputs some pre-canned text. It doesn’t even remember that it’s already used that “WOOHOO” line the first time I mentioned ice, so the second time I mention it, it just prints it again. The fact that I was asking why the ice is so exciting is completely lost on it.

And if I try to ask about something it doesn’t know about, like when the next ship to Mars is, or what “subsurface” means, it just says random stuff, depending on what keywords, if any, I happen to use.
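The whole "engine", as far as I can tell from the outside, could be sketched in a few lines. (This is my caricature, not their actual code; the canned lines and keywords are my inventions, loosely based on the transcript above.)

```python
# Caricature of a stateless keyword-matching chatbot, of the kind the
# Phoenix CyberTwin appears to be.  Keywords and canned lines are my
# inventions, loosely echoing the transcript above.

CANNED = {
    "ice":   "You better believe it! We have found ice. WOOHOOO!",
    "mars":  "I love being on Mars. It's a great planet!",
    "weigh": "I am very heavy! 664 kilograms.. that's a lot of lander!",
}
FALLBACK = "Thank you for telling me that. What else should I know?"

def reply(utterance: str) -> str:
    text = utterance.lower()
    for keyword, line in CANNED.items():
        if keyword in text:
            return line          # first keyword hit wins, context ignored
    return FALLBACK              # no keyword: emit a vague deflection

# No state at all: mention ice twice, get the same WOOHOOO both times.
print(reply("Why is it so exciting to have found ice?"))
print(reply("So the subsurface is like underground ice?"))
```

There is nothing here that remembers the conversation, or that represents what "ice" or "Mars" actually are; the bot's apparent knowledge is exactly the canned-text table and nothing more.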

Most essentially, it has, in the most literal sense, no idea what it is talking about. It is utterly trivial, with not the slightest hint of intelligence. So, sigh, it’s Eddie-the-bot again: a relatively trivial algorithm (more trivial than Eddie’s, from this evidence), talked up as “intelligence” just because it sounds exciting.

There’s a Web version of the Personal MyCyberTwins that you can play with also; that’s where the adorable quote on the picture up there came from. A more extensive snippet:

Betty: Hi there Dale.
You: Hi! What is up?
Betty: Not much, What’s happening with you?
You: I just read about this “MyCyberTwin” thing in the internet, and I was curious about it.
Betty: The power of the internet is outstanding. It continues to grow and improve.
Betty: What was the last utterly selfish thing you bought for yourself?
You: Um, that’s sort of a personal question; probably this gaming laptop that I’m thinking about. :)
Betty: It’s good to talk to someone who is thinking at last.
Betty: I want to ask, what sort of people do you have as friends?
You: Nice people!
Betty: Yeah, thanks for making the effort with that response.
Betty: Do you have quirky obsessions that you never like to tell people about?
You: Why are you asking all these strange questions about me?
Betty: Why do you think I am?
You: Well in theory you could be trying to make a mental model of me. But I’m not sure that’s very likely! Is that it?
Betty: Well if you don’t know how do you expect me to?
You: Um, anyway… So what kinds of things do you know about? What is your area of expertise?
Betty: I am very smart and I know most things because I am a MyCyberTwin. Do you Dale?
You: You know most things? What is the airspeed of a swallow?
Betty: You are going to tell me I think.

Not tonight, Betty; not tonight…

Closer and Closer!

With the announcement that Ray Kurzweil is giving the introductory keynote address at this year’s SLCC (Second Life Community Convention), there’s been some resurgence of the whole “Are Second Life and AI and nanobots and stuff going to transform the world tomorrow, or will we have to wait until next week?” meme, and related thoughts.

New World Notes calls the announcement “extraordinary and transformational”, which strikes me as way over the top. (I mean, even if you think Kurzweil’s thinking is extraordinary and transformational, it would be a weird thing to say about a single talk, much less about the mere announcement of a talk.) I replied in the comments (lightly edited):

/me grins. “extraordinary and transformational” is a tad strong, I think. He’s done some really good work in OCR, speech recog, and cool musical instruments, but he’s kinda over-the-top in the AI and virtual reality realms.

One of his most famous charts is that hysterical one showing number of neurons a computer can simulate over time, and implying that by the year whatever computers will be smarter than people. As if the hard problem in AI was getting enough transistors on a chip! (Example: a mouse is higher on his chart than the Deep Blue chess-playing computer; but how good is your typical mouse at chess?)

His ideas about virtual reality are fun, but again I think overblown. When I’m wearing these glasses and “walking around” in a completely immersive virtual world, explain to me again how I avoid tripping over my real-world chair and walking into walls? And 10 or 20 years seems like a wild underestimate for people having brains full of nanobots. The things he says are cool-sounding, but I think he’s drifted away from practical fact in various ways.

I’m sure he’ll give an engaging and thought-provoking keynote, but these days he’s really more of a showman than a technologist; it will be fun, but hardly extraordinary or transformational. The danger with Kurzweil is that he goes beyond the factual or even the plausible, makes the techies roll their eyes, and builds up unrealistic expectations in the audience that, when they are not matched in reality, could lead to a backlash of (similarly unwarranted) skepticism.

And then, in reply to some good words from Extropia DaSilva:

I think one of the things that somewhat makes me roll my eyes about Kurzweil is that he has a number of things like that chart: the most obvious message is an extremely exciting, but wrong, one (in this case, that we’ll have computers as smart as people by year nnnn), whereas if you read him carefully enough he’s actually using it to make a claim that’s more plausible, but much much less exciting (in this case, that by year nnnn we’ll have overcome one of the very minor challenges in making smart computers).

If all he’s really saying is that we’ll have solved the easy problem, why did he bother to make that chart at all? Where is his chart of progress in the software / semantic side of the problem (which would be essentially flat)?

I share your skepticism about his claimed timescales. This sentence is another example of the tendency I posit above: “we are learning to build artificial brains that are getting closer and closer to matching the power and performance of the biological version”. Taken at face value, with “closer and closer” meaning that we’re pretty close, it’s exciting but false. Taken more literally, with “closer and closer” meaning “we’ve gone from a thousand light-years away to 999.9 light-years away”, it’s true but boring.

I think Kurzweil’s right about the exciting things that people will be able to do in the future. I think he’s wrong about how much progress we’ve currently made in those directions; and that’s a big part of his message.

Really I think it’s good that they got Kurzweil to come and talk; he’ll stir things up. People don’t have to be right to be interesting, or to inspire useful discussion and even useful work. Which is good, because I don’t think Kurzweil is right. :)

It’s easy to get excited and breathless about all this cool futurist stuff, in either direction. In a comment thread on Second Thoughts, Desmond Shang wrote, on the subject of Cyc:

The great thing about this, is that it would make an awesome avatar back end intelligence with very little work.

which rather disappointed me, because Desmond is usually more sensible than that. Cyc would do no such thing; at most it would help slightly with one of the many problems that we are light-years away from solving in “avatar intelligence”. Of course, if someone can prove me wrong about that with very little work, I hope they do. :)

This all reminds me of that widely-blogged demo where some folks made a program-controlled avatar (a ‘bot) called “Eddie” that supposedly was able to reason at the level of a four-year-old. Looking into it more deeply, what they’d actually done was a small demo of how a program could be explicitly programmed to model a particular problem about belief-understanding in such a way that it was about as good at it as a four-year-old person would be. Which is probably a good piece of research and a fine use of time, but the impression that people were getting from it, something like “we can now have Second Life bots that are as intelligent as four-year-olds”, was just completely wrong.

Another recent example of this, I suspect, is that “Milo” demo from Lionhead. In this case the maker of the thing is making pretty amazing-sounding claims about it (including that what they are doing goes beyond anything in science fiction!), but I strongly suspect that the reality behind it is much more modest. (Which is to say, my “rigged-demo” detectors are pinging hard the whole time.)

(Reminds me also of that “OnLive/OTOY” demo of how advances in server-side rendering are going to give us all the ability to get to Second Life at 60 fps from our cellphones any day now. Uh-huh.)

And on the other side Second Thoughts has now spent three whole entries on how anyone who says favorable things about AI and nanotechnology and life extension and transhumanism and stuff like that is a crypto-fascist who wants to take over the world, in typical flaming-at-straw-men fashion. Not that straw men don’t make a nice fire. :)

I find that I don’t have a simple opinion about all of this stuff, myself. I think science is, overall, a good thing; figuring out how the world works and how to make it work more the way that we want it to is good. Exactly what “we” means there, just who (if anyone) should be in charge, and what should happen when what I want to do (whether enabled by science or not) conflicts with what you want to do, are all hard questions. In general I’m a left-libertarian in some sense; I think that the government should leave us alone unless we’re actually harming or defrauding someone, and that it’s nice when what we choose to do with that being-left-alone is to be nice to each other, to share things, to sit around wearing flowers in our hair and playing the guitar, and so on.

Along with that, it’s good to think about all sorts of wild stuff that some of us might want to do in the future, like modify our bodies to be able to live in space, like developing devices that are actually intelligent, like making itty bitty machines that can swim around in our bloodstreams and keep us healthy. And as we think about doing those things, and start to even do them, the same principles apply: we each should be allowed to do what we want if it’s not hurting anybody, and it’s nice when we do it in nice cooperative ways involving guitar music.

Hm, I’ve been rambling here, what was I going to say? Oh, yeah: and while it’s fun to have some people around (Ray Kurzweil, Peter Molyneux of Lionhead, and so on) who make it sound like things are farther along than they really are (because that makes us hopeful, and stirs up debate), it’s even better to have, when we can get it, realistic estimates of what’s really going on.

Because truth is good, too.