
DevelopersVirtualWorld

I’ve been away for a while, and I promise you, it’s for a good reason.

I have no way to organize these thoughts right now, so I’m just going to blurt them all out.


(What follows was originally written (mostly) by LionKimbro during a MoonEdit session.)

Mattis, I’m going to start with you.

SecondLife. You have to see Second Life; it’s exactly the sort of medium that you are looking for.

Music, Mattis, music. Music is everywhere in Second Life, and I know you’re going to like it. It’s more like radio, but- I don’t know. It’s very much a social affair, and I think- you just have to see it.

So, Mattis, your mission in life, should you choose to accept it, is to figure out how to retrofit your computer so that you can do SecondLife, and then report back here on what you discover there!

I’ll probably have more words for you on this later, but: just start now. Do whatever it takes to do Second Life!

You need:

I hope you can get those things together, because you’re going to LOVE Second Life.

Now, let’s see, I have two places I want to go from here-

Other things:

Um,… Time to just pick one!

FreeSoftwareDevelopment? and the web as a VirtualWorld.

Well, let me start with: Folks, you have to see SecondLife. Actually, not just see it, you have to do it. You have to get in there, construct an avatar, and move around a bit.

The reason you have to see it is because this is what we’re going to do.

Futures:IntelDeveloperForum2005Keynote - Intel says that in ten years’ time, we’re going to have excellent speech recognition and live facial recognition. I’ve read in Scientific American that advanced voice synthesis is developing as well. And while we already have bandwidth and 3D capabilities right now, let’s not forget how much more of both we will have by then. (Speech recognition, face recognition, and voice synthesis get the big mention because we don’t even have them in popular, easy-to-download software.)

What it amounts to is that when you scrunch your face up in a smile and a “ewww” face, your online avatar will do the same. If you stick out your tongue, your online avatar will stick out his or her tongue. 2015.

FacialExpression? will become an element of UserInterface control.

Pretty much any face you can make with your human face, your avatar will make.

Your avatar will probably be everywhere, as well. People are going to tire quickly of remaking their avatar whenever a new world comes about. I now understand the urgency with which many people have been pushing for VRML, and other 3D-model standards. We need to be able to quickly communicate models, animations, worlds, objects, etc., etc., from place to place. That’s a demand- people will want this. We just have to distribute this info.

It’s very exciting!

Speech synthesis will be used to mask your voice with the voice of your character. I imagine there’ll be some sort of work to register the intonations in your voice when you speak, and carry them over into the voice synthesis process as well. This is basically a speech transformation issue, using the vocal cords as a user interface as well.

Folks, we are squarely in the domain of TransHumanism here. This is clearly not your grandfather’s humanism. This is where people start to go: “Wait, why are we dedicating all these cycles to recreating voice? What’s wrong with my own voice?” Yes? We see the split here. The people who flock to the virtual worlds pick up multiple bodies, voices, etc. This is the beginning of the post-humans.

Check out this wiring pattern:

Brain – Physical Hand – Avatar’s Face, see?

And we’re going to replace it with:

Brain – Physical Face – Avatar’s Face

Because the connection between our brain and our physical face is much faster and more intuitive than overloading our already overloaded hand channel.

And for speaking, we’ll just speak. We can save the hands for controlling movement, flying.

Yeah, it totally makes sense, and we’re totally going to do it.

You can see right now- in Yahoo’s chat- they have a 2D avatar system. They have similar dress and expression work going on. It’s just very powerful; people like it a lot. It just makes sense. And, it makes sense that people don’t want to be cut off into neighborhoods, and that we’ll eventually collect it all together.

This suggests something- we’ve talked before about how technology starts out centralized, in one place, and then we figure out how to delegate it out, how to parcel out the responsibility and connections and whatnot. As these virtual worlds develop, we should expect to see integrated environments first, and then to see them distributed out. So if we are interested in building a virtual world, we should not worry about external protocols and whatnot. We should only worry about manufacturing the virtual world. The code can be yucky, the code can be eww, as the various ravages of development life (such as reprioritization and bugfixing) strike the worlds of code. Yadda yadda yadda.

Now: Why might we be interested in building something like this? Why would we, free software developers, be interested in building a virtual world? Aren’t virtual worlds just places where people go to have bondage sex? (No: more on that later.)

Something you notice playing in Second Life is that it’s awfully hard to do anything that doesn’t focus on appearances. It doesn’t help much if, say, you want to write a document together with somebody online. It also doesn’t help much if, say, you want to write some code and automatically put it into and pull it out of a subversion server. If you want to build and maintain a Futures:ArgumentGraph, you’re not going to get much assistance from the world. It’s made for its thing, and it doesn’t go too much further.

You would be better off using a wiki, or a mailing list (minding the LimitationsOfMailingLists), or other such things.

When you think about it, we’ve sort of made a Virtual World of the web. The web is sort of a very primitive Virtual World. It’s just not very well suited for seeing other people. We’ve had to manually force that into it.

It makes me think: What we could make, if we wanted to make a virtual world for developers, and for free software developers in particular, is a VirtualWorld environment where web pages are “places,” and which is made for constructing and collecting ideas in real time with one another.

RayKurzweil? pointed out that a telephone creates something of a virtual reality environment. It’s just a purely auditory virtual reality. But a VirtualWorld nonetheless.

MoonEdit is also a VirtualWorld. It creates a world made of text and letters. We even have bodies! Namely, we have the color of our text. ;)

We can imagine a future where people work on text documents together, before we fully enter the Virtual World. When we work on documents together, we will see the 3D faces of the avatars of our co-workers on the right side of the screen, or wherever. As we work, we will see the avatars of our co-workers talking, laughing, and smiling.

We will focus on the letters, in the center of attention, and put the document together. It’s conceivable that you could point with your finger to a place, and people could see a target appear at that place (like in Peek-a-boom :)).

We currently have a big problem switching from medium to medium. One low-tech developers’ environment would be to network existing mediums.

Let’s say you’re on IRC, and you need to work on a document with others. You would say “edit document foo,” and then it would open gobby automatically on everybody’s computer, and calibrate it to the proper page for editing. Right now, we have to talk for about 5 minutes before we can actually get going. The system should also help people download and install the software if they don’t have it. It should say, “You need MoonEdit for this; can I download it for you and get it running?” Yes, yes, yes.
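To make this concrete, here is a minimal sketch of that “edit document foo” trigger. Everything specific in it is my own illustrative assumption, not an existing tool: the command format, the shared-session host, and the way gobby would be invoked.

    # Minimal sketch of the "edit document foo" idea above. The command format,
    # the session host, and the gobby invocation are all hypothetical.

    import shutil
    import subprocess

    SESSION_HOST = "session.example.org"   # hypothetical shared-session server

    def handle_chat_line(line: str) -> None:
        """If a chat line says 'edit document <name>', open the shared editor on it."""
        words = line.strip().split()
        if len(words) >= 3 and words[:2] == ["edit", "document"]:
            open_shared_document(" ".join(words[2:]))

    def open_shared_document(name: str) -> None:
        editor = shutil.which("gobby")
        if editor is None:
            # This is where the "can I download it for you?" step would go;
            # the sketch only reports the missing dependency.
            print("You need gobby for this. Please install it and try again.")
            return
        # Hypothetical invocation: real gobby versions differ in how sessions are joined.
        subprocess.Popen([editor, SESSION_HOST])
        print(f"Opening shared document {name!r} on {SESSION_HOST} ...")

    handle_chat_line("edit document foo")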

And if a Skype conversation, or some other vehicle of IM appears, then it could also take care of all that initiation and connection. Yes, yes, yes.

Because, right now, SwitchingCost between mediums is so incredibly high. It takes like (or feels like) 10 minutes to just negotiate the switch to the next medium. The net result is that we have all these capabilities, and we just don’t even use them.

That suggests that this is a high-priority task, as far as mediums go: the ability to connect them to one another. Perhaps a protocol would be built, where you advertise your medium’s addressing capabilities and how you signal transfers, or something like that. I don’t know, I’m not thinking too much about it right now.
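Just to make the shape of such a protocol visible, here is a rough sketch of a capability advertisement and a “switch mediums” signal. The field names, addresses, and message format are invented for illustration; nothing like this is standardized.

    # Rough sketch of a medium-capability advertisement and a transfer signal.
    # All field names, addresses, and the message format are illustrative only.

    import json

    capabilities = {
        "user": "lion",
        "mediums": {
            "irc":    {"address": "irc://irc.example.net/#communitywiki", "live": True},
            "editor": {"address": "gobby://session.example.org/",         "live": True},
            "voice":  {"address": "skype:lion.example",                   "live": False},
        },
    }

    def propose_switch(caps: dict, medium: str, topic: str) -> str:
        """Build a transfer signal: "let's continue <topic> over on <medium>"."""
        entry = caps["mediums"].get(medium)
        if entry is None:
            raise ValueError(f"{caps['user']} does not advertise {medium!r}")
        return json.dumps({
            "action": "switch-medium",
            "medium": medium,
            "address": entry["address"],
            "topic": topic,
        })

    print(propose_switch(capabilities, "editor", "the DevelopersVirtualWorld draft"))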

OR, it might be better to go with the totally integrated route. I hate that route, but I observe that it works a lot of the time. What it means is making a super-medium that has its own instant messaging system, its own live document editing system, its own live voice system, its own web browsing (yikes) and editing system, etc., etc., all focused on live interactions.

VirianFlux?: Is it a reliable route? Different companies provide software to fill niches, competing with other companies in a general direction and adding individual improvements that are closed solutions. E.g.: MSN Messenger and Yahoo Messenger have totally separate encodings for their webcam protocols. Noteworthy new trend: Google Talk uses Jabber, an open system for delivering IMs.

Personally, I wouldn’t take that route, but it’s something we should watch: When the new system or framework or whatever comes out- which path did it take?

Is it a reliable route? I don’t know; it seems to work pretty well for MMORPGs, Kuro5hin, etc. There is very little interconnect between mediums right now, beyond RSS. RSS is like the only thing, and everyone thinks it’s God.

Kuro5hin is its own medium, with its own integrated voting and blogging system. The mechanics could have been generalized, and made connectable, but it would have cost Rusty so much to develop all of that stuff that it would never have been completed.

It takes forever to do this stuff. You can’t just write a nifty protocol, and then people use it. If that were the case, Local Names use would be widespread. In reality, it will probably take 2-5 more years before people feel that there’s a need for it, and use it, if they ever do. I have a hard time imagining it never being used, but it’s possible that when it comes into use, it won’t be called “Local Names.” Perhaps someone popular will reinvent it, not knowing that something like it already exists. Or, perhaps it will be a side-feature of some super-system for annotating links. For example, I can imagine that there would be a standardized link annotation format, and part of that system would be attaching a “short name” to the link. The link annotations would be grouped by community, or whatever (the equivalent of a “namespace” in Local Names), and that would be the vehicle. I can easily imagine that happening, and have thought about going into general link annotation because of it.

Re: Competition vs. Cooperation:

I think that if it were cheaper to create software in a generalized way, rather than in an integrated way, people would do it. I think that the temptation to categorize the problem entirely in a greed-vs-giving way is actually a mischaracterization, and a holdover of last century’s battles. I think we should look at Coase’s arguments, which are based more on structures of integration and collaboration, and then we see that greed-vs-nurturing is just dramatic theatre. It’s clear that MSN and Yahoo will compete, when they could just as well collaborate. But in the Free Software world, it’s not clear that it’s because of developers’ egos. Free Software developers have scarce resources, and the cost of actually taking the time to connect and coordinate is the dominant factor. In time, we will solve this problem (and the developers’ virtual world may be part of it), and we should suspect that once that problem is solved, the Free Software world will totally mop up the proprietary worlds.

It just seems to be the way: Things happen first in the proprietary world, and then they are figured out and understood, and the knowledge and implementation distribute out to the free world. It’s a good balance, and it is as it should be. 18-year-long software patents are dangerous to this process, though, and can unnecessarily put the world on hold for 14 years. A patent duration of 5-10 years would be much better. VirianFlux?: or a few years maybe, that’s just me; maybe totally excluding open source code from the patents too (although this could well lead to developers squashing rivals through open-sourcing their code).

Regardless, let’s see, where were we…

…A virtual environment for software developers, or for people working on things like papers.

This would include webpage annotation. It would have web presence- as you look at a page, you can see other people looking at it with you. You could see where the other person is on the page. It’s conceivable that the technology may exist to figure out what, specifically, you are looking at, and to mark it somehow. Perhaps you could say, “Look here,” and whatever you are looking at is flagged.

It’s very important that we can see people who are doing things. If you are writing code, you should be able to see the person writing code. You should be able to see what file they are working on. We should (God I hope so, there’s no reason we couldn’t) have a way to spatially arrange code. There are often very good spatial relationships between parts of code, after all. So, you should be able to see that somebody is developing, what part of the codebase they are working on, and where in the file they are working, and you should be able to hop in on the side and “bother” the developer, or add help, or whatever.

We need to be able to perform all kinds of manipulations of the codebase; we should be able to look into the past, check how things were, and we need visual techniques for showing that other people are doing these things as well.

VirianFlux?: not just the literal place but the concept too, I hope

LionKimbro: (?) Which concept?

VirianFlux?: when programming something, what particular part they are working on, or when writing a play etc., the general task at hand, not necessarily which paragraph?

LionKimbro: oh! I see; Yes, what part they are doing- what phase in development, or whatever. yes?

VirianFlux?: yup, like a shared to do list, except more intuitive

So, we should imagine that we have these sorts of things, and figure from there about what we can do.

We should look at what we do now, and say: “This is similar to life before there was an Internet.” You know- unspeakably primitive. “They were just beating rocks together, back then.”

That is, we should be aggressive as we pursue this. ;) None of this: “I can’t imagine why you would need this.”

With these things, we’ll be able to see just who’s doing what, and we’ll have context, and it’ll be easy to switch mediums, and switch to doing a drawing, or just- whatever. It’ll take work to figure out what works and what doesn’t; I can imagine it taking 15-20 years before this vision becomes real. But it’s something we should work towards, and hold in our imagination.

Okay, so, time to talk about another thing.

Links

Developers virtual world (warning, fairly geeky): Hmm, what I would like would be a CVS (or Subversion) tree that can be viewed as a wiki (with history, diffs, etc.) - maybe that already exists - heck, maybe even something that can be edited, online, like a wiki, with either controlled access rights, or only the ability to edit comments and documentation. That would be neat, and that’s somewhere where we can expect some progress. Imagine having Emacs or IDLE or Visual C++ in “collaborative editor” mode. Or having the Wikipedia article / discussion pages be equivalent to code / documentation (though “interface documentation”, “code”, and “discussion” would be more likely) - where you could be reading through the code and adding comments or discussion, trying to learn, to explain, to make things more intuitive or readable … that would change the programming community.

Maybe it wouldn’t work, maybe there would be too much code and too little discussion, maybe programming implies too much change in too many places for it to work as a wiki. But hey, I’d like to be able to casually browse living code. Especially hyperlinked living code.
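As a very small, purely illustrative sketch of that “repository browsed like a wiki” idea: the snippet below assumes the standard svn command-line client and a made-up repository URL, and just glues svn cat, svn log, and svn diff output into wiki-ish “page” and “diff” views.

    # Sketch only: browse a Subversion file like a wiki page, with history and diffs.
    # The repository URL is hypothetical; the svn client commands are real.

    import subprocess

    REPO_FILE = "http://svn.example.org/project/trunk/README"

    def run_svn(*args: str) -> str:
        result = subprocess.run(["svn", *args], capture_output=True, text=True, check=True)
        return result.stdout

    def page_view(url: str) -> str:
        """Current content plus recent history, like a wiki page with its RecentChanges."""
        body = run_svn("cat", url)
        history = run_svn("log", "--limit", "5", url)
        return body + "\n== History ==\n" + history

    def diff_view(url: str, older: str, newer: str) -> str:
        """What changed between two revisions, like a wiki diff page."""
        return run_svn("diff", "-r", f"{older}:{newer}", url)

    if __name__ == "__main__":
        print(page_view(REPO_FILE))
        print(diff_view(REPO_FILE, "10", "11"))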

Virtual worlds: hey, I’m still working on my online game :) I sometimes think that with a bit of tweaking, it could also be called a visual language collaborative editor :) There are a lot of free software MMORPG projects, but they have a lot of vaporware, and I don’t know if many focus on the social collaboration side of things. I’m not sure it’s that needed, in fact - at least, not yet. More down-to-earth things like shared browsing or automating switching between mediums are more useful. Virtual reality is just a nifty glimpse into the future, into what things will be like when everything works together (and I find it hard to imagine why it would not, eventually).

Lion: I hear you on automated switching between mediums, but if everyone had cameras and sufficient bandwidth, I don’t see people using avatars to talk face-to-face; I’d see them just talking face-to-face. Except maybe if they were embarrassed about their surroundings, etc.

Emile: See also WikiAsSourceControlRepository. CommunityProgrammableWiki and http://www.nooranch.com/synaesmedia/beach/wiki.cgi?WikiDevelopmentEnvironment and (I think InfiniteMonkey?) are some WikiEngine projects that aim to achieve this.

(Vision of virtual worlds: Mutual visibility is essential.)

When I envision a virtual world for developers, one of the most important things (I think) is visibility of the actions of others. If a developer performs a commit to a database, the commit is visible, in real time. Furthermore, by looking at the developer (in some way- not necessarily expecting a human body to be at the other end- the “body” could just be an icon or a text representation), you can identify what work the developer is performing.
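Here is one way that kind of real-time commit visibility could be wired up, sketched as a Subversion post-commit hook (assuming the “database” is a Subversion repository; the presence-server address and the event format are invented for illustration):

    #!/usr/bin/env python
    # Sketch of a Subversion post-commit hook that broadcasts commit activity.
    # svnlook and the post-commit arguments are real; the presence server and
    # the JSON event shape are illustrative assumptions.

    import json
    import socket
    import subprocess
    import sys

    PRESENCE_SERVER = ("presence.example.org", 9999)   # hypothetical listener

    def svnlook(subcommand: str, repos: str, rev: str) -> str:
        result = subprocess.run(["svnlook", subcommand, "-r", rev, repos],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    def main() -> None:
        repos, rev = sys.argv[1], sys.argv[2]   # arguments Subversion passes to post-commit
        event = {
            "kind": "commit",
            "who": svnlook("author", repos, rev),
            "rev": rev,
            "files": svnlook("changed", repos, rev).splitlines(),
        }
        # Fire-and-forget broadcast, so every watching "body" can show the activity.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(json.dumps(event).encode(), PRESENCE_SERVER)

    if __name__ == "__main__":
        main()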

So, if I look at Bayle, and Bayle’s working on a paper at the moment, I should be able to figure out that he’s working on a paper, and furthermore, I should be able to identify just what paper it is, and what part he’s writing, as he writes it.

If Bayle is browsing the web, I should be able to see that he’s browsing the web, and what pages he’s been visiting, and the like.

Ideally, if Bayle appears to be concentrating, that should be made known. And if Bayle appears to be relaxed and just casually looking around, that should be made known. And if Bayle is laughing, having a good time, that too should be visible.

This way, I can determine if it’s a good or bad time to interrupt him, without requiring him to manually set a bit somewhere whenever he transitions states.

Bayle needs to be able to point to some code (perhaps via a touch screen) and speak: “Well, this part right here is what I’m thinking about right now…” On the other side, I need to know that he pointed to it with his hand, and to hear his voice.

(Our personal involvement.)

I’m not actually advocating that we personally do this work. I just think it’s something we-as-programmers should take an interest in and hold in mind. When we talk with people, we should talk about these ideas. I have not heard them as a collected vision before.

We are busy with other SuperProjects right now.

(The Masquerade Ball)

Bayle: I have been thinking a lot about whether people would use real images or avatar images. A week ago, I would have argued, like you just did, for real images. But I’m questioning that again now.

I can’t yet make a good argument for avatars, only a weak one and fragments.

Those fragments include:

  • People desire to look different. (Possible escalation of appearances.)
  • More than looking different, people (I believe) feel different than their appearances. Self-image is abstract, but bodily appearances are not.
  • The virtual environment may become more and more important to people. The current feeling of trust that is associated with the presence of physical form may fall away. InternetBonding may lead to increasing trust of the online form, rather than the offline form.
  • Avatars grant 3D vision in imaginary situations. (That said, the avatars can be reproductions of true-to-life 3D forms. But: why?) This is important if, say, developers see each other interacting over virtual machinery. You cannot render the appearance of manipulations if you do not have a body, a puppet, to perform the manipulation.
  • People will be in embarrassing situations, and seek refuge from them.
  • The machinery of future interfaces may be embarrassing to look at. (e.g., you may be a brain in a jar, or may be wearing dots on your face for facial recognition, or whatever.)

The arguments on the other side are very good, though:

  • It’s computationally expensive to construct and maintain avatars. For what benefit?
  • We’re doing real work here, and don’t want to communicate a Masquerade Ball. The UseRealName reasons.

I’ve been reading a manga called “Ghost in the Shell.” It’s not the original manga series; rather, it’s a more recent one, made in the last few years, I think. (I distinguish between the two, because it’s very different from the older one.)

It focuses a lot on the virtual environment, and about half the story takes place in the virtual world.

If code can be usefully spatially positioned, and we are socially working on code, then we will need some marker to represent our fleeting interest, at a particular point in time, in a particular position in the code. We can call this marker “the body.” If the visualization is sophisticated, then the body (or “marker”) could be anything. It will be something that is used to quickly identify the reader or manipulator- the person who is reading or writing that particular piece of code, at that particular point in time. It makes sense to believe that the marker would be consistent across scenarios, and thus be the body of the reader/manipulator.

The only question in my mind is whether it will be a less-than-faithful representation of the person’s material body, or whether it will be an idealized avatar, truer to a person’s self-concept.

It seems highly unlikely to me that it will be a faithful representation of the person’s material body; we don’t even do that here. (Rather, we’ve taken snapshots, and discriminated between snapshots, selecting the one that we like, and feel is truer to our personal self-concept.)

I strongly suspect that, over time, the idealized self-concept will take over, and that the material body will be increasingly devalued. That said, the process may take several decades.

Disclaimer added later: actually, I do find it rather arrogant (stupid?) to explain/argue/comment when someone obviously gave the idea of LocalNames quite a bit of thought… But anyway. It’s not the first time it happens on the Web. :) Older comment follows: I think Lion talks about things that are a lot like URIs. We have http://, we have mailto:, we have news://. That’s all old stuff. I’ve heard there is something like that for IRC too, and there is ed2k://. There are also interwiki links like [[wikipedia:wikipedia]]. But it would be much better if:

  1. these could be defined more easily, so that we can have person:, project:, day:15.09.2005, etc. Then it should be possible to
  2. create links easily in any text field (just by adding some [[square_brackets?]]), so that you can have them in e-mail, in ICQ, in IRC, in the description of your calendar entry, etc. Everywhere there is text there should be links. And it should be easy to
  3. launch other applications. We had something like that with command lines, so that you can do cool things like run programs on particular files, pipe stuff, etc. But without links we can’t really use even that existing functionality. If we had links like that, I could easily link to an e-mail message from here, to my shared calendar, to a person, etc. And that would not simply open a particular webpage like we can do today, but it would start a context-sensitive action. If I am on my computer, a person: link would open the address book entry or social networking entry. When someone else opens it, it may open an encyclopedia or an e-mail client. (A rough sketch of this kind of dispatch follows below.)
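A tiny sketch of what that context-sensitive dispatch could look like, using the person:/project:/day: schemes from the list above. The handler targets are placeholders; no such scheme registry actually exists.

    # Sketch of context-sensitive link dispatch for the hypothetical person:,
    # project:, and day: schemes described above. The handlers are placeholders.

    import webbrowser

    def open_person(name: str) -> None:
        # On my own machine this might open the address book; for someone else,
        # it might open a profile page instead. Placeholder URL.
        webbrowser.open(f"https://example.org/people/{name}")

    def open_project(name: str) -> None:
        webbrowser.open(f"https://example.org/projects/{name}")

    def open_day(date: str) -> None:
        webbrowser.open(f"https://example.org/calendar/{date}")

    HANDLERS = {"person": open_person, "project": open_project, "day": open_day}

    def follow(link: str) -> None:
        """Dispatch a link like 'person:lion' or 'day:15.09.2005' to a local action."""
        scheme, _, rest = link.partition(":")
        handler = HANDLERS.get(scheme)
        if handler is None:
            webbrowser.open(link)   # fall back to ordinary web behaviour
        else:
            handler(rest)

    follow("person:lion")
    follow("day:15.09.2005")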

Currently to have this you need the app to be on the web and you need it to be quite open. For example, it’s easy to blog Flickr photos or to link to blogs in comments to photos. For a person who only works with hip www (web 2.0) applications, the problem is already solved. But if you use applications in addition to services, it is not that easy simply because there is no linking mechanism.

I am not sure how this will go. There is some interesting potential in WinFX?. If you treat every object as a database entry, you open the potential to link to it. But I don’t see developers making the next step - realising that linking is a HUGE concept by itself and that it’s necessary not just for some practical reasons, but for ideological reasons - you need to be able to point to any virtual object, action, moment, etc., no matter what it is. Right now there is no simple way to do it. The problem is that there is no one responsible for it. No one feels it is their job to implement it. And ideally it would be a separate entity that sits in between and decides how to interpret the link, so we probably need some of the big players to wise up… That would take time…

I have been a very active member of the SecondLife community for the past couple months. Hence the silence on my blog. But I wanted to make a few comments here on various topics mentioned.

On Avatars: I think avatars are a good idea. They make people more recognizable at further distances, and it’s cheaper to make something that looks different enough for recognition. The reason faces would be a good idea would be to integrate into our wonderful facial-recognition circuits. However, those circuits need a very realistic image to kick into gear. I’d rather be looking at many dissimilar avatars where I can just tell they are different. Also, the avatars can be much less complicated than a human body. I can imagine some people who would just be simple glowing spheres that change brightness depending on attention.

On Collaboration: There is plenty of collaboration in SecondLife already. It just isn’t apparent at first glance. Building objects in the game is very much a social project. They have huge sandboxes where people spend time just building things and, more importantly, watching others build, helping them get things lined up, and so on. It’s not direct collaboration on the same things, but it’s very efficient, and I’ve seen some wonderful stuff come out of those kinds of collaborations.

On Other Things: I am really enjoying playing SecondLife. The scripting language leaves much to be desired, the modeling interface has other issues, and I could go on about the variety of technical problems I’ve run into. But the social group is well above par. I spend hours discussing a wide variety of topics, and there are always new people showing up. The game is ahead of its time, and so are most of the players. If you need a guide to the world, I’m on in the evenings PST as TheCrypto? Doctorow.

Lion:

I’ve been reading a manga, called “Ghost in the Shell.” It’s not the original manga series, rather, it’s a more recent one, made in the last few years,

There’s a new Ghost in the Shell manga? Cool! I assume I have the old one. Where do I get the new one? How can I verify that I have the old one and not the new one?

on avatars: I’m convinced that avatars are needed for VirtualRealityCollaborativeEnvironment?s. I was only questioning whether, if you were having a conference with others, you would prefer to have it with avatars rather than videoconferencing. After reading your arguments here and on the page SecondLife, I still think I’d prefer videoconferencing, but you did present some interesting advantages for avatars (I’m most swayed by “people’s expressions are hard to interpret” and “people want to have their awesome avatars, not their ugly bodies”), so maybe I’m wrong.

On the “people want to have their awesome avatars, not their ugly bodies” front, point #1 of the article at http://www.pointlesswasteoftime.com/games/wowworld.html makes that point better than I ever could.

awesome!

Bayle: There was a US-style comic called “Ghost in the Shell 2: Man-machine Interface” published by Dark Horse Comics in 2003; all that lovely Shirow goodness! There’s also been more anime as well (in addition to the two movies); I’ve only seen the first season of that, and really liked it. --FredDrake

See my comment in the SecondLife page. Much of that comment could be relevant to DevelopersVirtualWorld.

Hello, I’m the guy Sam knows who works on InterReality. One of our goals is to create an open and flexible platform so that you can do stuff like this with fewer constraints imposed. We want to link 3D spaces in to other systems (e.g. web systems), and to incorporate 2D graphics and text, diagramming, and metadata. The InterReality technology (VOS) is all about how objects relate to each other in different contexts. Want me to explain more about what InterReality is over on that page, and you can respond with questions about how to do what you want to do, Lion (and others)? (I did notice that some of the stuff on IsCroquetSecondLife also applies to InterReality as well as Croquet.)

Reed,

  1. Which stuff on IsCroquetSecondLife also applies to InterReality?
  2. How do we try out InterReality? I’m interested to try it. Seems like it could be a cool way to augment voice conferences with VisualLanguage.

Update to number 2: reading http://interreality.org/static/docs/manual-html/using.html right now…

This week I’ve stumbled across 2 different manga for developers:

  • Linux: "Ubunchu! The Ubuntu Manga is now in English"
  • databases: “The Manga Guide to Databases” 2008. I heard about it from Bruce Eckel, and I see that Cory Doctorow briefly mentions it and says “I sure hope it’s the start of a trend. I want a manga guide to supersymmetry, the surplus labor theory of value, tensor calculus and many other elusive concepts.”.

I see the VisualLanguage of these manga as a small step away from stiff, dry, boring text normally written about these topics, towards Lion’s vision of a DevelopersVirtualWorld.

It’s on my wish-list! I’ll buy it at some point.
