
OverHear

I have an idea for a new sort of communications medium. I think this sort of medium will be used all over the place, in the future.

I don’t have time to make up a good name for it, so I’m just going to call it “OverHear,” because one of the features is that it’s easy to “Overhear” things by this system, and get in on it.

This system is applicable to IRC, IM, and blogging. It’s a OneBigSoup idea. I think it resolves some of the major problems we have in Blogging too. Thinking about it now, I believe it’s even applicable to event systems, machines- the robots.

I’m talking too much about this without giving you the actual basic idea- so let’s do that first.

Conversation Fields

Imagine that there’s a “warp space field” around every one of us.

In Crest of the Stars (see geo cities dot com /Tokyo/Shrine/4777/Seikai/seikai.html), when spaceships jump into hyperspace, they carry a bubble of normal-space around them. Before they encounter enemy spaceships in hyperspace, they sense their far-away normal-space. When their normal-spaces touch and merge, they can actually start firing missiles at each other.

The “Conversational Field” is something like that. The “IRC world” or the “World at Large” (basically) is like hyperspace. It’s not really something you interact in. But you have a conversational space around you, where you can say things and hear things.

And the idea is this: When you are talking with someone, your conversational fields merge.

So, say before we start talking, it’s me and AlexSchroeder.

Okay, I send an IM to AlexSchroeder, saying, “How are you doing? I see you just logged in.” (This is like Instant Messaging Presence, so that you can see who’s logged in, and who isn’t- that kind of thing.)

And then if AlexSchroeder responds, “I’m doing fine; It’s good to see you. Did you see the new article I wrote?” … then our conversational spaces have “merged” into one.

Brief aside: Blogs

Now, let’s think for a moment about Blogs. We’re switching from the IM/IRC world for a moment- keep it in mind, but let’s think blogs for a moment.

What’s a big problem in blogs?

People are having full-on conversations in blogs, but it’s very fragmented from the outside. Yes? You see it?

Like, you see one person’s voice in their blog, and they’re actually conversing with 3 other people. But you have to mentally reconstruct the whole thing. TrackBack sorta helps, but only in a very limited way. It’s one-directional, and it only points to one person really, making these tree-shaped things. Good for crediting sources, and watching news ripple out, but poor for holding an actual conversation.

If blogs could evidence their conversational fields, then we wouldn’t be having this problem: You could see the whole conversation, with each participant’s part.

In fact, the whole “I’m a blogger, posting a diary entry,” to “I’m a blogger, participating in a conversation,” to “I’m in IRC, participating in a conversation” set of category bins- the divisions all start to melt away. Right? You see it? If not, that’s fine; I should probably get back to talking about the basic idea again.

Back to the Conversation Fields

So, two people are talking in their conversational field.

Now, who cares? Why introduce this idea of the conversational field?

Because, what you do is, you subscribe to OverHear people.

You say, “I’m interested in AlexSchroeder, MattisManzel, ChristopherDucamp?, BayleShanks, yadda yadda yadda…”

You list a big long line of people who you’d like to “OverHear.” And basically that means that you’re listening to their conversational spaces.

(Brief aside: See how this carries over into the IntComm:EventSystem?? Every process in a machine would have a conversational space as well; programs could note the transactions between other programs and people. We were already modeling every process as having its own bus, its own event distribution center. And we were already modeling them as listening to other processes, and having presence…)

So, let’s say MattisManzel is “subscribed” to LionKimbro. If I start having a conversation with AlexSchroeder, he can become aware of it, by some machinery. Maybe a light lights up on both of our names. Maybe our words are merged into a stream. Who knows, I’m not worried about the particulars of the UserInterface at this point.
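
Here’s a rough Python sketch of what that machinery might look like; every name in it (Participant, overhear, talk_to) is invented for illustration, not taken from any real implementation:

    # Sketch only: participants carry a conversational field; "overhearing"
    # someone means you get notified whenever their field merges with another's.
    class ConversationalField:
        def __init__(self, owner):
            self.members = {owner}            # participants currently sharing this field

    class Participant:
        def __init__(self, name):
            self.name = name
            self.field = ConversationalField(self)
            self.overhearers = set()          # people subscribed to OverHear this person

        def overhear(self, other):
            """Subscribe to another participant's conversational field."""
            other.overhearers.add(self)

        def talk_to(self, other):
            """Start a conversation: the two fields merge into one."""
            merged = ConversationalField(self)
            merged.members = self.field.members | other.field.members
            for member in merged.members:
                member.field = merged
            # Everyone overhearing either party learns about the merge.
            for watcher in self.overhearers | other.overhearers:
                watcher.notice(self, other)

        def notice(self, a, b):
            print(f"{self.name} overhears: {a.name} and {b.name} are now talking")

    # Mattis is subscribed to Lion; Lion starts talking with Alex.
    lion, alex, mattis = Participant("LionKimbro"), Participant("AlexSchroeder"), Participant("MattisManzel")
    mattis.overhear(lion)
    lion.talk_to(alex)    # -> MattisManzel overhears: LionKimbro and AlexSchroeder are now talking

A real version would put the notification behind whatever UserInterface you like (lights, merged streams, and so on).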

So, anyways, MattisManzel can note, “Oh! That’s interesting!” And he can decide to jump on in as well.

Or maybe there’s a “live indexing” bot that we’ve absorbed into our conversational fields, and it starts to note our conversations, and then people find our conversation via it. (See the ThinkBot.)

We’ll talk about ways of finding conversations, organizing “chat rooms,” implications for privacy and the CommBot, etc., etc., in a moment. At any rate.

So say MattisManzel jumps in. Now we’re all three talking, and our conversational fields have merged.

Now Mattis has a friend, and Mattis’ friend is listening to our conversation.

Or maybe not; Maybe he says, “This isn’t interesting to me, I’m just going to ignore or tell the software to ignore Mattis’ little world for 30 minutes.”

Yes? Is this idea of conversational fields clear?

Attention needs to be paid to how we join and take apart these conversational fields. That’s not my principal idea right now, though- I just want to communicate this idea that we’d have them, that they’d join and break apart, and stuff like that.

Of course, you could have two conversational fields at a time. You could have purely private conversational fields, or conversational fields with group-alignment, privacy, stuff like that. So if your friends and you want your private parallel space for talking, or something like that, that’s okay; you can do that. And of course, you could send a one-message, one-target message. Of course. There’s no question about that.

But I think predominantly, people don’t like having their attention split over more than 2 conversations, and even active participation in 2 conversations is a bit iffy.

Passive participation- well, we can lurk on a ton of channels. I’ve never sought or found a limit, but it’s got to be at least 5 or 6. I’ve listened in on 10 channels at a time before. But anyways.

So we have these spaces.

Whence Chat Rooms?!

Where did the chat rooms go?

In IRC, we have chat rooms. You want to talk about Wiki, you go to #wiki. You want to talk OneBigSoup, you go to #onebigsoup.

But what I’ve just described with Conversational Spaces, with “Conversation Fields,” or whatever- it sounds much more like IM, where it’s just a sea of individual people, right?

Sure, we have “groups” where fields merge, temporarily, but then it goes back to the “sea of individuals” model of IM.

But, it’s no doubt useful to have “Chat rooms,” right?

After all, we like to talk about topics, not just with individuals and groups, right?

(Note that this is probably good material for another page: TopicsPeopleGroups?. It’s a pattern we’ve seen recurring for a while, now.)

I’m not sure how to model the “Chat room,” really, under the OverHear or ConversationField model, but here’s how I think it would work:

Instead of a “Chat Room,” you’d have a “Peg.”

I’m calling it a “Peg,” because it’s not closed like a Chat Room.

The “Peg” is a non-human, non-bot, (I guess I should say “non-participant,”) Conversational Field. That is, it’s a thing in the chat world that, independent of any particular activity, forms a Conversational Field with nothing in it.

You can imagine it like a piece of paper floating out in hyperspace, with its own normal-space bubble around it. Written on the piece of paper is the name of a subject, or something.

It’s called a “Peg” here.

And unlike “participants,” it doesn’t join a chat room. Rather, you summon it into a conversational field.

(Yeah, I’m worried about spammers too- perhaps there is some sort of permissions, white listing, whatever system, around this “Peg.”)

So, let’s say that you’re interested in Wiki.

Now, you and a friend start talking about Wiki, right? AlexSchroeder, LionKimbro- the two of us are talking about wiki.

And we go, “Oh, we’re talking about Wiki- we should bring in #wiki.” We could name the Pegs “#this” or “#that,” to remember the good ol’ days of IRC, and not feel so uncomfortable about the whole thing, right?

So we summon the peg #wiki, and now our conversation is in #wiki’s conversational space.

Now, how do people learn about our conversation?

Well, they’ve, themselves, pegged the #wiki peg. Yes? See? (In reality, it’s probably got a big long GUID, a globally unique identifier, so that there could be a few #wiki pegs, or whatever.) But anyways, so we’ve got #wiki pegged, and everyone who’s OverHear-ing the #wiki peg’s conversational space now hears what we are saying. Yes? Right?
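
A rough Python sketch of the Peg idea, just to make it concrete; the names (Peg, summon_into) and the details are illustrative guesses, not a spec:

    import uuid

    # Sketch only: a Peg is a non-participant conversational field with its own
    # GUID.  Summoning it into a conversation lets everyone who has "pegged" it
    # overhear that conversation.
    class Peg:
        def __init__(self, label):
            self.label = label                # e.g. "#wiki"
            self.guid = uuid.uuid4()          # so there could be several "#wiki" pegs
            self.overhearers = set()          # people who have pegged this peg
            self.conversations = []           # conversations currently in its field

        def summon_into(self, conversation_members):
            """Pull this peg into a conversation's field and notify its overhearers."""
            self.conversations.append(set(conversation_members))
            names = ", ".join(sorted(conversation_members))
            for watcher in self.overhearers:
                print(f"{watcher}: {self.label} reports a conversation between {names}")

    wiki_peg = Peg("#wiki")
    wiki_peg.overhearers.add("MattisManzel")
    wiki_peg.summon_into({"LionKimbro", "AlexSchroeder"})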

Brief aside: ProjectSpace

There’s a tie-in to ProjectSpace, here. It is likely that the #wiki Peg will need to split in 2, or 3, or 4 sometimes. Like in a giant OpenSpace meeting, where they bifurcate, divide, spread all over the place, right? So, similarly, having all the world’s conversations in one “super-space,” OneBigSoup- we will need to be able to hold different conversations on the same exact thing. So we want to be able to network our Pegs into a ProjectSpaceNetwork type thing, like WikiNodes. Right?

Back to the Pegs

So, our Pegs- these are what we were calling “Chat Rooms” before. And they’re networked, so that if you split, people who are just casual listeners are now automatically following two pegs, rather than one, and when they recombine, their OverHear-ing goes back to overhearing just one conversational field.

Now I believe I have described the basic idea, and the rest is just details and applications in different domains.

Blogs

Now, blogs, in a way, are sort of like this: They’re conversational fields which can’t merge.

This is advantageous in a way.

And, HEY! Notice this!

A person could “micro-blog” by just speaking into their personal conversational field, right?

No need to actually start and initiate a conversation with anyone-

You can just speak out loud into the void.

When this concept merges into VOIP (VoiceOverInternetProtocol), then you can literally just speak into the void, and anyone who’s subscribed to your conversational field- they will just instantly hear what you are saying.

And, if they feel like responding, they can merge their conversational field with your own, and you two are having a conversation. Of course, if one of HER friends was listening to HER field, and finds that they have an interest in the conversation that you two are having, her friend can join in the conversation as well. Or, just kick back and listen. Or silence it for a while. I imagine we’d eventually have Mattis’ beloved sliders, so you could adjust the volumes on all the conversations. (WOW! “The Whole World is Watching” suddenly gets new life. It’d be just like being in Cerebro, for those who saw it.)

But see, anyways: The whole: “A blog is something where you write out loud, and people listen,” thing- this concept includes both that, AND it includes the whole we’re talking back and forth thing that IM does. AND it does the whole we’re talking in a group thing that comes with IRC chat rooms.

And it does them all transparently.

Beyond Conversation

Ultimately, you want to carry this beyond just conversations.

That is, you introduce ActivityAwareness? into the conversational field.

This is a parallel field: You have two fields around you, actually. When you are in a conversation with someone, their two fields, and your two fields, merge. We can imagine that one is yellow, and one is blue. Two “auras,” if you will.

One is the one that you know and love, the one we’ve been talking about all along.

The second one is a machine field. This is where your robots talk in their robot language. You don’t listen to this field, you don’t talk over it. Rather, machines do it for you, and you see the results of their chatter.

For example, say, while talking with someone, you go to a particular web page. Now a note goes into your machine field saying, “USER SUCH-AND-SUCH LOOKED-AT-WEB-PAGE http://blahblahblah.” It could be XML, it could be whatever. Just some form of machine chatter that humans don’t speak, right? (Until we get to the LUI, or whatever, but that’s another story entirely.) So the message went into your machine field, or a channel in your machine field, or whatever, saying what you are doing. And then if people want to see what you are doing, or if they are in a conversation with you, or whatever, then they can see “oh, he’s looking at this web page now.” Your computer would figure out what to do with it. Perhaps, in the background, open up a web browser and show you what your collaborator was looking at, or scroll a message at the bottom of the screen with a URL- whatever, something, some intelligent response.
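
For instance, the machine-field note might be nothing more than a little structured record like the following Python sketch; the field names (“actor,” “verb,” “object”) are made up here, and the real thing could just as easily be XML:

    import json
    import time

    # Sketch only: a machine-field note saying that a user looked at a web page.
    def looked_at_page(user, url):
        return json.dumps({
            "actor": user,
            "verb": "LOOKED-AT-WEB-PAGE",
            "object": url,
            "time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        })

    note = looked_at_page("LionKimbro", "http://example.com/some-page")
    print(note)   # the listening client decides what to do: open a browser, scroll a ticker, ...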

And then as groups form, you see the activity of all members going by, together. Right, so you have that activity awareness.

Of course, if you’re curious what someone’s doing, you can just OverHear it. (Provided that they don’t have a privacy mode set, or holding down the privacy key, or not holding down the public key, or whatever wackiness TheHumaneInterface would have you do.)

You can say, “What page’s Lion looking at?” (“Oh, that’s some really good porn. I’m going to have to remember asstr.org.”) Or whatever.

When we have free SubEthaEdit going (see IntComm:SubPathetaEditLinks? if you’re interested in tracking this,) we can watch people writing wiki pages in real time. “Oh, Mattis is writing something. Let’s look over his shoulder, and see what it is. Oh, that’s really interesting! Let’s break the writing session and talk about it, and then we can work on it together.”

Hive Mind

Yes, yes, you see?

We’re building the HiveMind.

There is no question in my mind: This is the way we will be communicating in the near future. This fragmentation between IM, IRC, all this stuff- this is the next step past it.

You can integrate the old systems in with this. You can build this system on top of the older IM and IRC systems. It will all work. It’s beautiful. It’s great.

This is the way. After seeing the Conversational Spaces, the deficiencies in the older systems are clear to me, and so is how this connects them together.

We’re building the HiveMind.

People have asked, “What is the HiveMind, how will we know it, what is it.” You’ll just see it. You’ll just open up the ports of your computer, and you’ll hear people talking to the wind. You’ll see their text. You’ll ask Navi, “Who is talking about the Singularity,” and it’ll show you a bunch of conversations, and you’ll start listening in on them.

We will network our conversations. You understand that, right? We will organize and network our conversations. We will keep notes and minutes. We will chart the space. This is the perpetual jam session, the hum and beat of the HiveMind. This will all happen. Blog posts will just be the intermittent bass drum to the continuous melody singing in our ears. The instant messages will be the individual notes.

The Internet will Sing.

Now we just have to build the thing.

Discussion

Thinking back, maybe this should be called:

  • ConversationBubble?
  • ConversationSpace?
  • TalkBubble?
  • ActivitySphere?
  • SphereOfActivity?
  • BubbleOfTalk?
  • BubbleOfActivity?

Not sure what. :)

I also wanted to say- I forgot to say this in the main article-

I wanted to say:

Blog posts are just prepared essays thrown in this space.

You have a conversational space made of long essays.

It may be the case that the MailingList is made obsolete by the ConversationSpace?. The only reason you’d keep it around is for privacy. And you could model privacy into the ConversationSpace? concept as well- I think I mentioned this briefly- you’d have it so that if you are talking in the privacy mode of a particular group, then only the members of the group see your expressions there, and only members of the group can join to talk in that particular privacy hyperspace.

So, okay, anyways- that’s all. Blogs are big long essays being posted into the void, and if you get into a conversation by blog, then people see both sides of it because for those exchanges, the spaces of the participants have merged together.

That’s it. Go back to what you were doing. :)

I feel the needed technology infrastructure for that is PublishSubscribe?. And when that is built, I think it will open the way for many more things. It is quite close to what I was thinking when writing Brudnopis:SocialRouting as well - it feels like it is extending the same basic technology in a different direction.

I’m not sure if I understand you right, when you say “Publish Subscribe;” Are you sure it isn’t the same thing as an IntComm:EventSystem??

I wrote (and am probably going to abandon) OneBigSoup:DingDing; and there are efforts like IntComm:ModPubSub? and IntComm:PubSub?. I eventually abandoned my effort on OneBigSoup:DingDing, though, because I came to figure it would be better to just use IRC or Jabber (JEP-0060) as the event subscription & distribution system.

That is, it feels to me that the necessary systems are in place for subscription and distribution, right now.

People don’t usually use IRC as an event distribution system, but I think it’s perfect- it’s already been used a few times. See the IrcWhiteBoard? idea, which has actually been implemented. It works by distributing messages via IRC.

I originally thought it was sort of ridiculous to use IRC. But then, as I was writing security and privacy and administration controls into DingDing?, I realized, “Wait! This is ridiculous! IRC has at least 15 years of history and wars and hacking and security put into it. There’s no freakin’ way I could ever possibly hope to match that. It distributes messages, people subscribe to channels. It supports TransparentMessaging. It has a very sophisticated and capable distributed administration system. What more could I want?”

Publish: Type a line (or more) of text.

Subscribe: Join a channel.

So, I stopped writing DingDing? v5, right then and there. I was about 50% done with the base code. I couldn’t work on it any more: “IRC is such a better way of doing this. Why am I clumsily rewriting what already exists?”

DingDing? 2 (DingDing? v5) will be just an XmlRpc subscription or SOAP API that just runs on top of either an IRC server, or Jabber JEP 60 node. That is: To publish, you send a message to an XML-RPC server. The XML-RPC server uses a bot to post the message to a JEP60 node or IRC channel, in a machine structured format. Then to subscribe, you send different XML-RPC signals, describing your subscription query. (see QuerySubscription?.) When events post to the IRC channel or Jabber node that match your query, the bot in turn places an XML-RPC notice.
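
A very rough Python sketch of that adaptor, just to show the shape of it; the method names (publish, subscribe) are hypothetical, and the IRC/Jabber relay is stubbed out with a print:

    from xmlrpc.server import SimpleXMLRPCServer

    # Sketch only: publish/subscribe over XML-RPC, with the actual IRC or JEP-0060
    # relay stubbed out.  A real version would hand the line to a bot connected to
    # the channel or node, and call subscribers back over xmlrpc.client.
    subscriptions = []    # (callback_url, query) pairs, kept in memory here

    def publish(channel, message):
        print(f"[bot] would post to {channel}: {message}")
        for callback_url, query in subscriptions:
            if query in message:                          # stand-in for a real QuerySubscription
                print(f"[bot] would notify {callback_url}")
        return True

    def subscribe(callback_url, query):
        subscriptions.append((callback_url, query))
        return True

    if __name__ == "__main__":
        server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
        server.register_function(publish)
        server.register_function(subscribe)
        server.serve_forever()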

You could just as well go straight to IRC or Jabber, ignoring the clumsy mechanism of XML-RPC calls. (Bots are just a little harder to manage than XML-RPC calls, hence the Adaptor.)

So, I feel the machinery to get a basic version of OverHear running is already in place. There are a number of publish-subscribe systems out there. We can write this, understand the problem domain, and then improve it later on.

Answering the question about IntComm:EventSystem? and PublishSubscribe? - perhaps it is the same thing, but Publish Subscribe seems to be a bit more recognized vocabulary, having been introduced in the Programming Patterns literature. With my note I just wanted to spread that vocabulary so that we could find a common language with the software people. Perhaps that is ripe for refactoring.

Oh! I see what you’re saying. Here’s C2:PublishSubscribeModel. God I wish we had TopicNode, so we could link all this stuff together.

In the game world I lived in, we always called it an “IntComm:EventSystem?”. But I like the PatternLanguage movement. Okay; We’ll call it Publish Subscribe.

I think this is a very cool idea. Here’s my initial way of looking at it. I break the idea into a few components:

  • ConversationalField?s, as described above
  • the idea of conversations as a class of entity unto itself; this is how blogs, IM, and IRC chat rooms are all brought together into a unified framework (i.e. they are all special instances of “conversations”, with the differences lying in how the conversation is connected to other things, like people)
  • PublishSubscribe? for ConversationalField?s, i.e. the “OverHear” functionality
  • OnlinePresenceForConversations?, i.e. just as in IM you can get info about the “onlineness”/status of your buddies (“person”-typed entities), in this framework you can get the status of conversations
  • Conversations as nodes in a graph: maybe not explicitly mentioned above, but this might be one way to technically implement ConversationalField?s.

More thoughts:

So, I’m envisioning ConversationalField?s as a new paradigm/metaphor for interaction. The other bullet points are more technical.

Conversations as a type of object: To me this is one and the same as ThreadsML. In order to treat conversations as objects, and to use the same framework for referring to conversations in blogs, IM, and chat rooms, we need to develop two types of standards: 1) standards for referencing a particular conversation, i.e. the “meta-info” that we attach to a conversation (“conversation 11131, called ‘my favorite conversation’, equals the content of #wiki on jul 15, 2006 from 3pm to 4:20pm”) (note that this part must be flexible enough to designate subsets of a given conversation as “conversations”, too), and 2) an interchange standard for transmitting the content of conversations (i.e. “this post on this blog is a reply to that post on that other blog”).
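
Just to make (1) concrete, here is a hypothetical shape for such a conversation reference in Python; none of these field names come from ThreadsML or any other standard:

    from dataclasses import dataclass
    from typing import Optional

    # Sketch only: the "meta-info" that references a particular conversation.
    @dataclass
    class ConversationRef:
        conversation_id: int              # e.g. 11131
        title: str                        # e.g. "my favorite conversation"
        location: str                     # e.g. "#wiki"
        start: str                        # e.g. "2006-07-15T15:00"
        end: str                          # e.g. "2006-07-15T16:20"
        parent: Optional[int] = None      # lets a subset of a conversation be a "conversation" too

    favorite = ConversationRef(11131, "my favorite conversation", "#wiki",
                               "2006-07-15T15:00", "2006-07-15T16:20")
    print(favorite)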

OnlinePresenceForConversations?: Maybe we should look into Jabber and see if the “status” of users can be extended arbitrarily; if so, maybe we could register conversations as “virtual people” within Jabber.

PublishSubscribe? for ConversationalField?s: If we have an IM-like mechanism for doing OnlinePresenceForConversations?, then technologically speaking the PublishSubscribe? mechanism would be like a buddy list

Conversations as nodes in a graph:

A person joining a conversation is like creating a special kind of link between a “person” node and a “conversation” node. There could be multiple types of links, representing people who are speaking versus invisible lurkers. Blogs are conversations with only one person attached via a “speaking” link. IM is a “conversation” node with two “people” nodes connected to it, both by “speaking” links. Chat rooms are a star topology; one “conversation” in the center, with lots of people connected to it via “speaking” links.
Alternately, links could be directional; a “speaker” is someone with two links to a conversation node, an arrow going from the speaker to the conversation, and an arrow going from the conversation to the speaker. “lurkers” (or “listeners”) have only an arrow going from the conversation to the lurker.
Alternately, instead of having “person” and “conversation” be fundamentally different, we could have any one type of entity; “conversational field”. This seems more elegant, and may be the better solution. The fact that some conversational fields happen to be “Lion Kimbro’s conversational field”, and some are “the #wiki conversational field” might be immaterial to the basic operations of merging and splitting conversational fields.
As for merging and splitting: there are various ways to represent this. One way is to have a type of link called “joined”; if conversation A is joined to conversation B, and B is joined to C, and B is also joined to D, then we would consider A, B, C, and D to all be one big “conversational field”.
Alternately, we could do everything in terms of directional or read / write links. We could represent “joined” links by having both read and write links / links in both directions.
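
A tiny Python sketch of that last variant- everything is a conversational-field node, “joined” links are undirected, and a merged field is just whatever you can reach over those links (the data shapes are invented):

    from collections import defaultdict

    # Sketch only: "joined" links between conversational-field nodes; a merged
    # conversational field is the connected component you can reach from any node.
    joined = defaultdict(set)

    def join(a, b):
        joined[a].add(b)
        joined[b].add(a)

    def merged_field(start):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(joined[node])
        return seen

    join("A", "B"); join("B", "C"); join("B", "D")
    print(merged_field("A"))    # {'A', 'B', 'C', 'D'} -- one big conversational field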

Talking about conversations in space, you should read about the Holocene Chat: http://www.corante.com/getreal/archives/006188.html . It is quite close - as I understand it, it uses spatial proximity on screen very similarly to physical space. You can talk with someone, and if someone else is close he will hear your conversation and can join in, but he can just as well have his own conversation in the centre of his conscious attention.

Bayle: I’ve been thinking about some of those ideas, and some of those are new to me.

I mean to draw some UI mock-ups, and to draw some technical diagrams describing possible layouts. I love the graph idea, and I’ll think about it on the way to work.

I’d like to say more, but I’ve got to go.

I think this is going to be my next big project.

Zbigniew: I’ve read about ChatCircles and stuff like that.

I think that spatial positioning and avatars and stuff like that have tremendous communicative possibility. There’s so much you can say by subtle positioning, and changing your avatar’s expression, and stuff like that.

OverHear isn’t really about that, so much- it’s more about how we participate in discussions, how we find discussions, stuff like that.

In OverHear, for instance, it’s sort of like having an RSS feed for each of your friends’ mouths. They say something, and regardless of where they are, what chat room they are in, etc.- you hear them. (The only prohibitions are privacy and stuff like that- you won’t hear what they don’t want you to hear.)

Or you could be listening to several pegs’ fields: whether they represent what we would think of now as a “mailing list,” a “chat room,” or some other locus of attention.

(I think it’s neat that you could peg anything with a URL: You could, as people use a SharedWebBrowser?, invoke the peg for the URL you are looking at. That way, people could see who was having a conversation around a particular page on the Internet. And if you use the machine-layer idea, you can have a whole web-browsing presence thing, where you can see who’s at the page with you. I believe Jabber has a working implementation of this basic idea, but not under the unifying model of OverHear.)

I am experimenting with Jabber now. It does not yet work very smoothly, but it is clearly evolving. There are many things in the JEPs that, when implemented, could mean many new features - perhaps much of what is brainstormed here. One feature I’ve just discovered: in Gaim you can make a chatroom your buddy; currently there doesn’t seem to be much semantics for that, but it seems very close to what you describe here. By the way, I’ve created a ‘wiki’ room at conference.jabber.org.

I’ve been thinking about, “Just how do you implement this?”

First, it seems like a basic model of “News, History, and Status” comes up a lot.

  • News: When something was said, when somebody entered or left a conversation, when something happened.
  • History (or “Context” or “Logging”): What’s been happening, so people can get caught up on events.
  • Status: How things stand right now.

You could use JEP60 or IRC to communicate events as they happen.

For the history, I guess there’s ThreadsML. Does it support not just words, but also people entering and exiting? I don’t know. Alternatives?

For status data, I was thinking some way of arranging an RDF graph. This would most likely be the “center” description of the conversation space- it would point to the history records, and it would point to where to get live News from. (It would likely also point to more FOAF information about conversation participants, derivative conversations, parent conversation, whatever.)
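
As a stand-in for that RDF graph, a conversation’s “center” description might look something like this Python sketch; every key and URL here is invented for illustration:

    # Sketch only: the "status" record for a conversation space, pointing at its
    # history logs and at the place live News arrives from.
    conversation_status = {
        "id": "urn:conversation:example-1234",
        "participants": [
            {"name": "LionKimbro", "foaf": "http://example.org/lion/foaf.rdf"},
            {"name": "AlexSchroeder", "foaf": "http://example.org/alex/foaf.rdf"},
        ],
        "history": "http://example.org/logs/example-1234.threadsml",    # where the logs live
        "news": "irc://irc.example.org/example-1234",                   # where live events arrive
        "parent": None,                                                 # parent conversation, if any
    }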

I’ve been working out some visual diagrams of the concepts here. But, I’m still diving into the ideas, and my thoughts keep changing. So, I guess it’ll be a bit longer before I post the diagrams.

ZbigniewLukasiak talked with me in IRC about this, and he pointed out that this is really about being able to traverse a graph of conversations.

We can network…

  • Conversations (what is described here)
  • Group Affiliations (like in LiveJournal, or Tribes, or whatever)
  • IM Buddy Lists (like nothing, yet, except perhaps sharing blogrolls)

It’s not really traversing conversations so much. Really, to traverse conversations would be something like- if a conversation split into two (say 5 people go to talk about A, and 3 leave to talk about B), and then one of those split into two (the 5 split into 2 and 3), then you could hop between the fragments- that’d be a type of traversing conversations.

Not really what I mean.

I mean more like: You overhear your friends talking, and you hear the conversations that they take part in. Then you discover other people, and you can then listen in on the conversations that they are having (or have recently had.) And so on.

This isn’t really why I care about OverHear, but: By the time you’ve implemented the infrastructure for OverHear, this is what you end up having the ability to do. (Whether you asked for it or not, really.) The goal was to overhear friends. But you end up building a graph, and graphs are traversable.
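
A little Python sketch of that kind of traversal- start from your buddy list, collect the conversations your friends are in, then the other people in those conversations, and so on; all the data shapes here are made up:

    # Sketch only: hop outward from your buddies through conversations to new people.
    in_conversation = {                  # person -> conversations they take part in
        "Lion":   ["conv-1"],
        "Alex":   ["conv-1", "conv-2"],
        "Mattis": ["conv-2"],
    }
    participants = {                     # conversation -> people in it
        "conv-1": ["Lion", "Alex"],
        "conv-2": ["Alex", "Mattis"],
    }

    def discover(buddies, hops=2):
        people, conversations = set(buddies), set()
        for _ in range(hops):
            for person in list(people):
                conversations.update(in_conversation.get(person, []))
            for conv in conversations:
                people.update(participants[conv])
        return conversations

    print(discover({"Lion"}))    # starting from Lion: conv-1, then via Alex, conv-2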

Zbigniew and I also talked about the difference between Conversations, Buddy Lists, and Group Affiliations. A group can have 50,000,000 people, but you don’t want to hear them all talking at once. A Conversation can have just 3 people, and 2 could be total strangers to you, and you may never meet these people again. A buddy list is not necessarily completely interconnected, like groups tend to be. And you might talk with many more people than are in your buddy list.

A list of conversations is a list of stuff that’s LIVE, or relatively recent.

re: ThreadsML and presence history: ThreadsML is just the beginnings of a discussion about a possible future standard; i.e. it’s wide open. So I bet we can add presence info.

I hear you say that “pegs” are different from “rooms”. How exactly are pegs different from rooms? Maybe we need more terminology?

If I listen to just one person talking – any number of people can listen to one particular person talking. That’s fine if he’s making a speech or reading an essay. But listening to one side of a telephone conversation – it’s a lot more difficult, perhaps impossible, to figure out what’s going on. Most blogs (and most web sites in general) feel like they are one side of a conversation.

So, we have the words one person says in one location (say, on a particular IRC channel, or on a Wiki page, or on 3-way telephone conference call). Then we have the words other people say in the same location. Together, all the words various people say in a particular location are a conversation.

I don’t know that you want just 1 field associated with each person. Most people have several interests, and even if I find 2 or 3 interests in common with a person, I don’t think I want to hear every word he says. Wikis seem especially good at providing several locations, dividing up someone’s sentences and posting each one to the appropriate location, and allowing me to skip over things I’m not interested in.

On the other hand, If I find one particular statement by one person fascinating, perhaps I will start listening to the other places he talks, that I previously thought I wasn’t interested in.

I agree that it’s difficult to carry on more than 2 or so real-time conversations. But the cool thing about Wiki (and Usenet before that) is that it’s possible to jump between dozens of different locations, get some context of what’s been said recently, and make an intelligent contribution in each location.

DavidCary

A single topical marker (“peg”) can hold multiple separate conversations, and each conversation might be marked with multiple topics. (see near the bottom of ConversationField, 2004-10-07 06:32, for more of my attempts to describe things)


Going back to the basic idea - the ability to overhear your friends conversing with other people. That’s quite clear with no additional terminology. Now we need to think about the interface to that functionality - we can have one big window with all conversations of all our friends just taking place, or we could have an extended presence sign that would show that someone is just having a conversation, and the conversation would be available on another screen. The second seems a bit more sane. So far so good - this additional presence could be a few words added by the IM client to the user presence message. Then we would need to choose the friend and one of possibly many of his conversations - I won’t elaborate on this step now. After that we need to join the conversation, and that is something that is not compatible with current Jabber technology, as it means shifting the standard two-person chat to a Multi User Conference. How can that be done?

I was imagining something like the present near-ubiquitous “totem pole” model. You know: A stack of names, from top to bottom, of who’s logged in.

And then I was thinking: As people say stuff, little word balloons extend from their mouth with their message, and then disappear after a few seconds of inactivity.

If you want to see the conversation itself, you click on one of the balloons, and it opens up the chat log. At that point, it looks a lot like a normal IM conversation. You appear as a “listener” of the conversation, at that point, listening at the door. If you want to participate, then you can click a “knock” button, which they then hear, and they can choose to open the door for you or not. They could have an “open door policy,” where anyone can just walk in. They could have a “no knock” policy, where you can’t even knock.

How’s that?
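
A tiny Python sketch of that door-policy idea; the policy names and the function are invented, purely to illustrate:

    # Sketch only: what happens when a listener tries to join depends on the
    # conversation's door policy.
    OPEN_DOOR, KNOCK, NO_KNOCK = "open-door", "knock", "no-knock"

    def try_to_join(listener, conversation):
        policy = conversation["policy"]
        if policy == OPEN_DOOR:
            conversation["speakers"].append(listener)        # anyone can just walk in
            return "joined"
        if policy == KNOCK:
            print(f"*knock* {listener} would like to join")  # participants decide whether to open
            return "knocked"
        return "refused"                                     # "no knock": you can't even knock

    chat = {"speakers": ["Lion", "Alex"], "policy": KNOCK}
    print(try_to_join("Mattis", chat))    # -> knocked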

The question is how you do the transition from a normal two-person chat to a Multi User Conference. You would need to start every chat as a MUC - just with two participants.

Yes, that’s the case.

Actually, not completely: Two participants, and N casual observers. The casual observers are the people who are overhearing.

Today is the one year birthday of this essay. And it still rocks!

page led Lion to think it’s a way to implement OverHear. Awesome!


See SharedAwarenessSystem

SharedAwarenessSystem is a bad name for what it is. We need a different name for the concept that we call “SharedAwarenessSystem.”

OverHear is actually not a SharedAwarenessSystem.

OverHear is neither a SharedAwarenessSystem (such as, say, a Planet blog), nor an individual’s awareness system (ie: your mail client, or your personal aggregator).

A SharedAwarenessSystem is a system of communication that people follow so that all members of a group are “on the same page together.”

Perhaps surprisingly, that is not what OverHear is: OverHear is a mechanism for individual subscription to overhear conversations.

But it does not, by itself, produce a SharedAwarenessSystem.

I added this as a possible thing to implement in VOS’s chat/messaging stuff: here's the interreality wiki page for ideas on chat, presence and messaging.

That page has a lot of random ideas for messaging and presence, covering capabilities, VOS implementation, and UI. Please add stuff there if you want, and I can try to respond on how generally to implement it in VOS, or help you do it.

We need to add more chat/messaging/presence features, anyone want to come help with it?

A phrase that might characterize this idea is “making conversation relationships real” or “making conversation relationships visible”.

This is a need I’ve found in using SecondLife and in thinking about 3D virtual worlds where people have conversations. In a spatial graphical environment, you can actually draw a visual representation of the conversation in the space along with the participants’ representations (avatars); this gives you a way to find conversational groups, and something to click on to join it. You can automatically arrange the positions of the avatars as well.

Lion, I am connected with some people who are developing some tools, based on open source software, and they might be interested in OverHear. When you have some time, it’d be great to chat with you about this over the phone.

Yes, I’d love to talk about it! 206.427.2545. I’m on a conference call until 12:00 PDT today.

I kept forgetting to plug this in here http://factoryjoe.com/blog/2007/08/25/groups-for-twitter-or-a-proposal-for-twitter-tag-channels/

Something like this might also work for OverHear (or give OverHear an edge over Twitter, maybe?)

Also, I am composing an email now to the person developing the software I discuss above, and I’ll connect you with them if they are interested. This project would not be looking to patent your idea, and would work with OpenStandards? and OpenSource software.

Would this group be the mysterious group asking “Wouldn’t you like to know what your friends have posted recently without worrying about where they posted it?”[1]

Note the section called “overheard” in Get Satisfaction. Such as:

http://getsatisfaction.com/wikinet/overheard
