
WikiMimeComments

Comments about WikiMime

(re:WikiMime)

MarioSalzer: Two things - Depending on which MIME type you want to register exactly, you need to file an IETF document - only personal/vendor types can be registered that easily with the IANA. (I’ve recently written a page on InterWiki: about this, but suddenly someone deleted it silently? - do you mean IntComm:InternetSociety? ..? – LionKimbro - nope, I meant another thing, somewhere else – MarioSalzer) As I see it you can at best register for “text/vnd.community.wiki” or something similar when using the IANA registration form. One could however use “text/x.wiki” freely and without registration (but such experimental MIME types are only intended for limited internal use, which is not sufficient for Wiki at all).

If you try to introduce meta data for Wiki pages (which I personally find a nice addition), like the #!wiki pseudo-shebang and magic number, then you are actually already trying to change WikiMarkup?. At least engine authors may be a little less annoyed by your recommendation than by the typical please-adopt-a-standardized-markup request (which I’d favour).

Your #!wiki document type also appears to have meaning only for stored files; it is not really necessary for internet transmission of wiki pages. Therefore I’d also favour registering 50 (there are probably not yet thousands of different wiki engines) individual MIME types - one for each wiki engine:

Whatever, I’d suggest recommending the #!wiki syntax not for general transmission, but only for saving pages to disk - but then a file name extension should likewise be standardized (“.txt.wiki” or so).

Mario, yes I’m aware of the requirement of actually writing up an RFC. I’m willing to go to that trouble. The RFC must contain the contents of the form, hence my attempt at communally filling it out. There are two problems with the idea of registering 50 names. First, it’s a mess. IANA would not be pleased to see 50 or 100 or 150 variants on essentially the same MIME type, and the number would keep growing. It also requires a registration for each name. IANA would (I’d guess) want us to use a ‘variant’ or like-named parameter. Second, a name like “prs.cunningham.wiki” is not unique: it’s very likely that there would be name clashes among the wiki syntax names. That’s why I recommended a URI, where the author uses a URI they control or have authority to use (or they’re using an existing wiki language identified by the URI). The idea of having the wiki magic number is the same as the one for perl, i.e., it identifies a wiki page under any circumstance. So while there are many circumstances where it might not be necessary, in others - storing, transmitting, processing, etc. - it might be valuable. I could have suggested “text/plain+wiki” except that wiki text isn’t really plain text, especially since it can contain substantial amounts of markup. There’d be “text/wiki” for wiki text, and “application/wiki+xml” for the interwiki text, if the community actually created their own new markup language. But having been involved in a few of those, I’d strongly advocate just using either XHTML or a variant of XHTML, since wiki text demonstrably is very similar semantically. --MurrayAltheim
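
As an illustration of the parameter idea (the variant value here is a made-up URI, not anything registered), a transmission header under this proposal might look something like:

    Content-Type: text/wiki; charset=utf-8; variant="http://example.org/mywiki/1.0/"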

I wouldn’t say #!wiki only has meaning for stored files. Given a suitable definition of the URI parameter (say, “if the Wiki file is POSTed to the URI, an XHTML translation must be returned”), it allows us to add - and exchange - new Wiki formats long after we’ve submitted the MIME type. Wiki engines that allow page transfers could automatically download new modules to support new formats natively.
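
A rough sketch of such an exchange under that definition (host, paths and markup invented for the example): the raw wiki page is POSTed to the variant URI, and an XHTML translation comes back.

    POST /wiki/1.0/ HTTP/1.1
    Host: example.org
    Content-Type: text/wiki; variant="http://example.org/wiki/1.0/"

    #!wiki "ExampleWiki" http://example.org/wiki/1.0/
    Some ''wiki'' text to be translated.

    HTTP/1.1 200 OK
    Content-Type: application/xhtml+xml

    <html xmlns="http://www.w3.org/1999/xhtml">...</html>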

Right now, we are looking to find the best way to add RDF to Wikis, necessarily changing the underlying format. Who knows what may be added in future? – ChrisPurcell

What’s missing from this discussion is an application model that motivates the usage of text/wiki in the way you suggest. It’s not obvious what the point of having a WikiTextMimeType is, if the variants are too numerous to write a parser for. That means users of the type will already be cognisant of what types they will be retrieving (as they will be scooping directly from a known wiki engine), and thus text/plain would be sufficiently distinguishable, or any other random thing, since the MIME type usage will be restricted to the particular wiki and the particular application (although the X- convention is recommended for this).

What would be more useful for the URL is an online “variant → XHTML translator” that the application can slam the text against to get some renderable output. That would significantly reduce the number of variants, however, as each must provide such a translator. Further, those translators should be registered under an Apache-style license with a central authority that isn’t going to die any time soon. (I can nominate Meatball if necessary.)

Finally, I wouldn’t abuse the syntax parsers, but find a way to use the MIME format itself. Using something like text/plain; xhtml+xml=http://example.com/cgi-bin/online-translator might work (or not). – SunirShah

I think it’s important to note something that isn’t perhaps apparent, which is that there are two parts to the proposal: first, the registration of the MIME type; and second, the advocacy of the magic number or ”#!wiki” identifier within the wiki text files. The application model includes (but is not limited to) instances where one wants to take a stored version of a wiki text file and transform it into HTML or XHTML for online presentation, or as a component in some scheme where HTML or XHTML content plays a part. ChrisPurcell caught on to this by immediately suggesting that one could from the command line auto-generate a website composed of wiki text documents. Remember: the MIME type itself doesn’t exist except during transmission, so in situations where we’re not talking about MIME the value of the proposal is the magic number. A wiki text document sitting on a file system isn’t recognizable currently as a wiki text document, as there is no consistent magic number. My proposal addresses this.

I don’t believe the idea of a central authority will ever get off the ground if the wiki community can’t even agree on use of a six-character string to identify wiki documents. And if there is to be a registration authority, it should be a real registration authority. Sunir, no offense, but your suggestion that Meatball “isn’t going to die anytime soon” really has no weight. It could die tomorrow, for all the outside world knows. I had a domain I let die because I moved on to something else. You could too. This kind of thing can’t rely on a single individual, or even an ephemeral group of people. I’m thinking of IANA as an authority. The W3C isn’t even an authority, as they might not be around in five or ten years (industry consortia don’t last forever, or at least don’t maintain their viability forever, if you follow their history at all). But we don’t need an authority, and I don’t think it would ever happen that anyone could force compliance with or acceptance of a registration authority. We use DNS because we must. The application model I’m suggesting doesn’t require buy-in from the entire wiki community; it merely suggests that applications could be written to accept more than one wiki syntax by recognizing the self-identified document syntax, and that there’s a benefit to self-identifying. A processor doesn’t have to process all wiki syntaxes to be useful; the point is that many applications might be written to accept more than one, if those syntaxes can be identified.

What I’m suggesting doesn’t require an authority, and despite my informal registry, I don’t expect to see a formal registry for all wiki languages. I think that is patently impossible, and coercive. What I’m suggesting is a very minimal way of (a) identifying wiki text documents as such, and (b) identifying variants. This can be translated into a variant parameter during MIME transmission as a way for processors to accept or reject a document based on their ability to process it. People can choose to use the magic number or not. But if they do, their documents are then identifiable as wiki documents at minimum, and as conforming to a specific syntax if they (optionally) choose to add the URI. Then it becomes possible for processors to begin accepting those documents. That’s a win-win, and requires only the willingness to self-identify. The syntax change is about as minimal as is possible: six characters, and a quoted string and URI if they want.
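
For concreteness, the three levels of self-identification described here might look like the following (the name and URI are hypothetical, not anything registered):

    #!wiki
    #!wiki "ExampleWiki"
    #!wiki "ExampleWiki" http://example.org/wiki/1.0/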

I can think of quite a number of application models, but I’ll just mention two. My application Ceryle includes an embedded database that can store text, HTML, or XHTML content. It keeps track of each of these via the appropriate IMT/MIME identifier. I have a sniffer that assigns the MIME type of a document when it is stored. I can’t currently sniff wiki text documents, as there is no canonical identifier for them. I can’t store wiki text documents except as text/plain, because there is no MIME type. I can’t identify which wiki text variant a given document might be using because there is no means of doing so. Ceryle takes a set of wiki text documents and auto-creates a series of XHTML documents from them. It does this for any of these four MIME types (text, wiki, HTML, XHTML). For my purposes this enables creation of large composite documents from smaller nodes. It could also allow auto-generation of whole websites. A big version of Chris’ command line idea.
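
A minimal sketch of what such a sniffer could look like, assuming the proposed (not yet standardized) “#!wiki” first line and the hypothetical text/wiki type; class and method names are illustrative only, not Ceryle’s actual code:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.Reader;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Sniffs the first line of a stored document for the proposed "#!wiki"
    // magic number and derives a MIME string from it. Neither "text/wiki"
    // nor the variant parameter is registered anywhere; this is a sketch.
    public class WikiSniffer {
        private static final Pattern MAGIC =
            Pattern.compile("^#!wiki(?:\\s+\"([^\"]*)\")?(?:\\s+(\\S+))?\\s*$");

        /** Returns e.g. text/wiki; variant="...", plain "text/wiki" when no
         *  variant URI is present, or null when the first line does not
         *  carry the magic number. */
        public static String sniff(Reader in) throws IOException {
            String first = new BufferedReader(in).readLine();
            if (first == null) {
                return null;
            }
            Matcher m = MAGIC.matcher(first);
            if (!m.matches()) {
                return null;                 // not identifiable as wiki text
            }
            String uri = m.group(2);
            return (uri == null) ? "text/wiki"
                                 : "text/wiki; variant=\"" + uri + "\"";
        }
    }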

The second example is something we’ve been talking about on the BlueOxen tools-yak list, which is the idea of removing the walls between email, wikis, mailing list archives, and the like. So you and I could be having a conversation using wiki text (or at least taking advantage of it), with the list archives auto-generating PurpleNumbers, WikiWord links, and a host of other possibilities. For this to work we need both an embedded magic number in the wiki text files and a wiki MIME registration that can also identify which wiki language is being used. I envision a modular processor being able to handle a large number of them (not all of them, but it’d be extensible). I’m willing to donate an existing application that does this (or most of this - it still takes an agreement on what “this” is). The API would be to accept a SAX InputSource? and output a DOM document. Pretty simple. The application would attach the variant URI to the processor, and accept any incoming text files that it understood. This could be a flow-through processor, or be part of a larger application. (sorry this was so long-winded…) – MurrayAltheim
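
Since the API described above is simply “SAX InputSource in, DOM document out”, a sketch of what one such module’s interface might look like follows (interface and method names are my own invention, not an existing API):

    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;

    // One module per wiki syntax, identified by its variant URI. A modular
    // processor would pick the module whose URI matches the one declared on
    // the document's "#!wiki" line (or in the MIME variant parameter).
    public interface WikiVariantProcessor {

        /** Canonical URI of the wiki syntax this module understands,
         *  e.g. "http://example.org/mywiki/1.0/" (hypothetical). */
        String variantUri();

        /** Reads wiki text from the InputSource and returns an XHTML DOM
         *  tree. Implementations should refuse documents whose declared
         *  variant they do not understand. */
        Document toXhtml(InputSource wikiText) throws Exception;
    }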

re: nominating Meatball. I would only do this if necessary. I really dislike the idea of placing Meatball at the centre of any central registry. I don’t like the TourBus bus numbers for this reason, though I did support owning the OpenDirectoryProjectWikiCategory, only because that was a step towards DevolvePower. LimitTemptation. – SunirShah

MarioSalzer: The “#!wiki” shebang is a nice idea, but the current syntax proposal isn’t very Wiki-like. First, forget about the quotes; make the Wiki identifier a WikiWord (and define it to be explicitly case-sensitive or not). And second, the URI should be a URL, and it should point to the corresponding TextFormattingRules page for the specified Wiki implementation (or at least a static copy of it). I’m saying this because I think Wiki text and XML are two very different things, and mixing some <!DOCTYPE-like syntax with the simple format of a #!shebang line is counterproductive.

If you’re going to put an RFC out, then make it explicit: require the magic number/line. The MIME type “text/wiki” would be fine, if you also made the requirement that it “MUST always include the variant=… parameter” (holding the WikiWord from the magic #!line, position 2 - to close the circle). Also, such an RFC should reserve a few Wiki identifiers like “text/wiki; variant=standard” as unregisterable, for future use (who knows?!)
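
To illustrate that pairing (the name here is made up): a page whose magic line names its markup with a WikiWord would be transmitted with that same word as the variant parameter, e.g.

    #!wiki ExampleWiki
    Content-Type: text/wiki; variant=ExampleWiki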

Also, I like the idea of a central registry. The IANA is the registry of the Internet, and it would make a good place for registering Wiki markup types. I’ve asked about this, and apparently it wouldn’t hurt IANA to have 150 different Wiki markups and MIME types registered, but I’d currently favour the variant= thingy. Also, if you write an RFC, you can add an “IANA considerations” section, requesting the IESG and IANA to set up another sub-registry for Wiki markups and magic number identifiers. Good thing. If you’d like, I’d help with writing it, before I see more of the wiki2xml2wiki approaches to page interchange (aargh).

MurrayAltheim: I agree with most of your suggestions, Mario. I’m not sure if all wiki text language names are expressible as wiki words, hence the quotes. I was thinking that some people might want to (in human-readable terms) call their language things like “Bob’s Wiki Language Version 1.3.4”. If we were willing to force wiki text language names to be expressible as WikiWords, that’s fine. You guys know more about what is considered okay in wiki land than I do. If we went with WikiWords rather than string literals, I’d say yes to case-sensitivity on the name, and I wouldn’t even argue with you on URIs vs. URLs. I don’t have a strong feeling about requiring the URL (as I don’t know how much that requirement would hurt acceptance), but I don’t think it’s reasonable to require that it point to a language definition page. That page might not even exist. The URL is really only necessary as a unique identifier. It’s obviously in everyone’s best interest to have it point to a page, but from a machine processing perspective, the real reason is to be able to canonically identify the language used. I was trying to be as unprescriptive about the proposal as possible, but if you think being strict would fly, fine.

I do agree that we need to register the MIME type with IANA, but once the MIME type is registered, we honestly (IMO) won’t need to bother IANA with registration of all the variants – that’s why we use URLs for the variant name. If we used something like “vnd.usemod.wiki” we would, but URLs provide the same level of disambiguation and identity as they do in RDF/OWL and XML Topic Maps. I am a bit wary of trying to push IANA to set up a sub-registry for all the variants, since that makes our proposal a lot more contingent on their willingness than I think is necessary, and my guess is that they wouldn’t. It also requires that anyone writing a wiki language register with IANA, which I think would be a showstopper.

Now, if you’re willing to help write it, I’m only too happy to have that help (contact me via email: “m dot altheim at open.ac.uk”). We can all begin by filling out the form above. I’ll try to do a summary edit of the thing once it gets towards some level of completion, because there needs to be an editing lead on this. I just want to do what is necessary to get the ball rolling consistently towards the goal, which for me is getting the “#!wiki” magic number and an appropriate way of labeling variants established with IANA, both for embedding within the file and for transmission via MIME.

MarioSalzer: If the I-D / RFC were primarily about advocating the (not yet established) wiki text identifier (magic number), then the IESG would likely reject it. Therefore the #!wiki magic cannot be the central point of the draft, just as the IANA registration section can only be one part of it. I’d say such a draft should mention the “#!wiki” only as a recommendation (but then with an enforced and reliable format), and then simply reference it in the MIME type registration paragraph.

Because RFCs shall not contain (possibly volatile) URLs to outside documents and services, the registration of variant= types must either be handed over to the IANA, or there cannot be a registration at all. However, as the MIME type’s variant=”…” parameter shouldn’t (cannot?) hold a URL, it may all become inconsistent without a central registry.

   I'd prefer identifying variants only by WikiWareName, rather than URL/URI.
   So I also question whether a URL must be part of the #!wiki magic line. If it
   weren't, the magic line would just be "#!wiki WikiWareName\r\n", which would
   however make it look less professional/necessary and hinder adoption.

A general problem with #!wiki, as I still see it, is that it introduces invisible markup into Wikis - because currently all (or most) text entered and appearing in a Wiki edit box will later be displayed in the rendered page. People won’t adapt rendering engines to filter out the first line on occasion, so this will have to happen at transmission time (when a page is copied from one Wiki to another). The I-D must take that into account and make clear recommendations. For my Wiki I would (a rough sketch follows the list):

  • send out the “#!wiki” line AND the correct HTTP header, if a page is requested in raw form
  • filter it out again when importing pages
    • throw away pages where I don’t know about the variant type, or can’t convert it into the internally used markup
  • I possibly don’t need a general #!wiki magic code internally, as files are identifiable as such by their location (running database, or possible page-import/ and page-backup/ directories)
    • and I personally value MIME types higher than any other identification characteristics (file name extensions or magic bytes)
    • on the other hand I enjoy file headers ;) (Here again that’s the most important question: if it were a standard - how would YOU implement and use it?)
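
A rough sketch of the first two list points (HTTP header handling omitted), under the assumed “#!wiki” syntax; class, method and markup names are illustrative, not part of any existing engine:

    import java.util.Set;

    // Prepend the magic line on raw export; strip or reject it on import.
    public class MagicLineFilter {

        /** Prepend the magic line when a page is served in raw form. */
        public static String exportRaw(String pageText, String variantUrl) {
            return "#!wiki \"MyWikiMarkup\" " + variantUrl + "\n" + pageText;
        }

        /** Strip the magic line when importing a page; return null (i.e.
         *  reject the page) if the declared variant is not one we can
         *  convert into our internal markup. */
        public static String importPage(String raw, Set<String> knownVariants) {
            if (!raw.startsWith("#!wiki")) {
                return raw;                  // no magic line: keep as-is
            }
            int eol = raw.indexOf('\n');
            String magic = (eol < 0) ? raw : raw.substring(0, eol);
            boolean understood = knownVariants.stream().anyMatch(magic::contains);
            if (!understood) {
                return null;                 // throw the page away
            }
            return (eol < 0) ? "" : raw.substring(eol + 1);
        }
    }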

For the without-quotes question: if someone really wanted to use spaces (in at most 10 out of 100 cases?), then the underscore would make a good replacement - that’s actually what this character is for. Also I would say the actual MIME type variant= identifiers should only contain [_\w\d]+ characters.

(This is not a rant against the idea, I still find it a great one - but I’d just like to list possible problems I see.)

:My difficulty with putting any emphasis or importance on what you’re calling the WikiWareName? is that it creates a likely land-grab for wiki names. That’s never good for a community. If suddenly one can register their project/product name with IANA, people will do it. So suddenly, you find some large vendor grabbing a bunch (maybe all) of the good names. Or what about two or more legitimate projects with overlapping names? Who arbitrates? With URLs, none of this is necessary. I’d almost rather drop the human-readable part if that’s going to be a problem. People could look at the URL to determine the syntax, it’s just not as pretty. I’m not sure what you have against quotes, since they are the most common way of delimiting textual content, and underscores are often part of names, so you’d then have to allow for escaping them. If this is going to get that complicated, I’d just recommend dropping the whole thing. (I thought quotes would be non-controversial.)

:I don’t think you can assume application behaviour, such as the auto-filtering out of important information like line 1. People who wrote processors that deliberately removed that information would be essentially saying: we don’t care what syntax you used, we don’t care what encoding you used. This would very likely destroy the ability of the document to be further processed (or even for its processing requirements to be identified). And yes, it would require that applications refuse wiki documents in non-understood syntaxes; that’s normal and appropriate. Word processors don’t accept things they don’t understand either. There’s a tacit agreement implied in use of the “#!wiki” magic number, i.e., that a given language identified by it includes it in its syntax. Given that currently there is no wiki identifier, nor a MIME type, this would not alter current tools or documents. It would suggest that existing tools and documents be altered in order to be identified. Keeping that alteration minimal is obviously important.

:Remember that absent that first line, the wiki text syntax is unknowable to a processor. I think relying on location (i.e., which directory or source a file comes from) is very fragile, and certainly wouldn’t work for files transported from other systems. Also remember that MIME identification doesn’t exist once a file is sitting on a file system, so it has to be an internal magic number and/or a file extension. File extensions have proven to be pretty unreliable in practice, e.g., what does “*.doc” denote? (Don’t answer MS Word). We can suggest a file extension (which must use only one period/full stop and three letters, since there are systems that only allow that).

In terms of encoding, I don’t think we have any choice: if wikis are an international technology, they must support localized text using whatever character encoding is correct for that localization. I’ve amended the text of the form to that effect. The easy way to deal with this is to use the most universal encoding (UTF-8, which includes US-ASCII as its first 128 code points) and say that absent a declaration this is assumed. This is what XML does, and honestly, we can’t do better than XML on this count. But we must be able to support Asian and other non-ASCII encodings. (This is part of the pain of growing up into an international technology.) – MurrayAltheim

I’m not involving myself in this discussion.

However, I want to mention:

There's an InterWiki wiki, and it has an accompanying InterWiki:InterWikiMailingList. MarioSalzer is on there, as are a few others. The wiki interchange format has been discussed there lately as well.

Hi Lion. Yes, I think they are related but really two different proposals. I am looking at the WikiMime proposal as one for wiki text only, hence it being “text/wiki”. An InterWiki format would more likely be in IANA’s application space, such as “application/iwiki” or “application/iwiki+xml” or something like that, the latter esp. if there’s going to be any XML or RDF content in it. I’ve checked out some of the InterWiki content as you suggested. I’m a bit curious as to why you are not involving yourself in this discussion. Is this subject not of interest to you, or is this not an interesting place to discuss this? Or something else?

MarioSalzer: We wouldn’t have to discuss a #!wiki magic at all if it weren’t that today’s computers have no support for storing meta data (MIME types) in their filesystems (I’ve heard SGI’s XFS can?). File name extensions are of course no replacement either.

For the file name extension I’d still recommend the talkative “.txt.wiki”, because many systems would use text/plain as a fallback, but at least in Web server environments the “.wiki” would take precedence. I also don’t see a problem with such double extensions - Internet-connected older systems usually know about their limitations and will compact filenames and extensions (as Win4.x does with its internal 8.3 names and its UCS-2-encoded visible filenames).
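
As a hedged illustration of the web-server point (assuming a locally configured, unregistered text/wiki type), an Apache setup might map the extensions like this; with both mappings present, mod_mime takes the rightmost extension for the content type, so “page.txt.wiki” would be served as text/wiki while plain “.txt” files remain text/plain:

    # illustrative only; "text/wiki" is not a registered type
    AddType text/plain .txt
    AddType text/wiki  .wiki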

Instead of “WikiWareName?”, I should rather have said “WikiMarkupName?”. But you’re probably right about the name grabbing - this may happen, and then it makes a stronger argument for a central registry. The URLs are of course more unique and provide for better machine usability (except that URLs are partially case-insensitive). The WikiMarkupNames? however are more user-friendly (they allow the non-geek to exactly identify the markup type), and would likely be preferred when dealing with markup types (at least by me). Also don’t forget about the MIME type here - the variant=”…” thing is already a very strange exception in IETF MIME land, but introducing a URL/URI there: too strange to get RFCified, I guess.

Also, as that WikiMarkupName? is only an identifier, it wouldn’t really hurt if someone had to register for “CommonWikiName?2” or “…3” if a name was already assigned. I even doubt that would happen, as wiki engine programmers almost always come up with cool and unique names for their babies […]

Hence, I’d say the URL needn’t be part of the #!wiki magic line, but it is a necessity for registration. Either the URL or the WikiMarkupName? is redundant, IMO. But on the other hand, the URL would provide that bit of extra information about the markup used, even if it is currently not useful for automated evaluation (think: it could point to a script for use with http://atox.sf.net/, for instant markup conversion).

Btw, I still believe it would be far easier to write up a minimal MeatBall:WikiMarkupStandard, and then let Wikis provide a small set of regex rules to convert to and from it - obviously faster than a complete XML/XHTML roundtrip for converting the mostly very similar Wiki markups into each other.

The UTF-8 encoding is the best choice of course. And we can do much better than XML by disallowing the IMO super redundant UTF-16LE, UTF-16BE and UTF-7 encodings. --

MurrayAltheim: I think the problem of name collisions is much more serious from both a programmatic and a legal/intellectual property/trademark standpoint, and can’t be left to forcing people to put a 2 or a 3 after their name. First of all, from a trademark POV, that won’t help. You can’t create a product and call it Microsoft3. While some people in the wiki community may be hobbyists, whatever becomes a wiki standard is going to have to be more rigorous. I’ve done standards work for over a decade, and in my experience you can’t treat these things as a hobby (I’m not suggesting you are, rather that we have to be very clear about naming and identification). There are also some serious security issues in overriding names. I agree about not passing URLs as variant parameters in the MIME header, but I’ve not actually pushed that idea much. We don’t really need to. I’m predominantly interested in the self-identification of files from within (on that first line). It’s at that point that uniqueness is important, not at the MIME header point. We could take a cue from XML Namespaces and do exactly the same thing as XML, i.e., use a local name that is considered a proxy for the URL, e.g.,

    #!wiki SmilingWiki http://www.smilingwiki.com/wiki/1.0/

would mean that anytime someone referred to “SmilingWiki”, they’re really talking about the URL. That’s how things work in XML Namespaces.

As for character encoding, you can’t exclude any specific encodings unless you’re willing to say that entire countries can’t conform to a wiki standard. They’re not redundant at all.

The optional parameters section describes only the parameters themselves, and wouldn’t go into a great deal of detail, instead referring to a larger section on the subject.

MarioSalzer: Trademark conflicts won’t escalate at MIME/magic registration time; legal problems arise at the point in time a name is chosen for a wiki engine. And it’s probably a lower risk to call an engine MicrosoftWiki than to register for httpz://www.microsoftwiki.com/wiki/1.0/. You cannot work out such problems in a single RFC, especially since we all know the DNS has always been vulnerable to virus and lawyer attacks. ;)

And there is no problem with file name extensions here. Just think of “.tar.gz” and “.xcf.bz2”; they work quite reliably. Also, the 3-character limit is stupid, and everybody who gives JPEG files a “.jpg” or “.jpe” extension or HTML files a “.htm” suffix is simply wrong (see also the “.xhtml” vs “.xht” discussions). There are a lot of important file name extensions which are (or should be) longer than 3 chars: .vrml, .tiff, .mpeg, .java, .class and .shtml - it is simply laziness (of Win/Mac users) that we still see the falsely shortened variants of some of these today.

I also find the comparison with XML namespaces misplaced here, as Wiki and XML are really opposite things (if we for a moment leave out the WikiInterchangeFormat discussions on the InterWikiMailingList?). The URI/URL makes no welcome replacement for an identifier, because there are too many weird characters involved (or at least allowed) - so it may not be usable at certain points in a piece of software where a unique word-character string would succeed. MIME types on the other hand have a very simple and well-known structure, and much software is better prepared for using them instead of a URI identifier.

If you have now decided to concentrate your efforts on the #!wiki magic line, I’d say that’s the way to go. But then you shouldn’t (cannot) write an InternetDraft? (RFC) for it. It was the possible relationship between the MIME types and the #!wiki magic that made your idea so interesting (to me at least). But it somehow seems to depend on unique identifiers with only associated concrete URLs.

Mario, I think you’re missing my point about XML. I wasn’t suggesting we use XML; I was suggesting the idea of a semantic equality between the WikiWord and the URL, i.e., that within Wiki text documents and in places where URLs are not considered appropriate, the WikiWord would be used, but it is always a proxy for the URL. Regardless of whether wikis and XML are opposites, wikis and URLs are pretty tightly bound. Why not take advantage of that? Why reinvent some new methodology, one that will require its own infrastructure?

And I think we’ll have to agree to disagree about intellectual property. I won’t tout my credentials in the area. But think of it this way: if you go to the trouble and expense of developing a brand, a logo, writing documentation, etc. using a name, and that can be hijacked by anyone, that’s a non-starter for a lot of people. We can’t solve IPR issues in general, but we must provide a way to avoid name collisions. The idea with the URL is that while you can get sued for using MicrosoftWiki (and you’d lose, and you’d certainly lose if you used their domain), you can identify things using your own domain and be pretty much guaranteed of winning. And I hate to say it, but people’s entitlement to the use of names is a big issue (see Get Out of My Namespace – James Gleick, New York Times Magazine [1]). Lion is trying to solve a similar problem in DisambiguatedNames?. The W3C has been trying to solve this problem for years. The Knowledge Representation community has published years of papers on the subject of identity. It’s a tough nut to crack. The brain-dead simple way of solving this is to use a name within a namespace you control, e.g., URLs. We need a reliably-namespaced string, and currently the way to do this on the web is via DNS/URLs. That’s why it was built into XML Namespaces, RDF, etc. Now, if Lion can solve this problem, he should probably get a Nobel prize or something. I’m of the mind that when I see somebody else has solved the problem, I prefer not to reinvent. URLs work. If we have five JoesWiki languages out there, that’s not going to work.

MarioSalzer: So, then what’s the conclusion? - The WikiMarkupName? is the moderately reliable identifier and the URL the exact pointer. Could we now negotiate on the following points:

  • registration, at best centralized, and optimally: IANA does it
  • a WikiMarkupName? is chosen at registration time, and associated with a unique URL
    • first come, first served registration (as with all MIME types)
    • registration can be updated, and especially URLs/URIs can be updated to reflect a derived (but still compatible) markup (that’s what we’d have to work out)
  • the WikiMarkupName? is used for the MIME type text/wiki variant= parameter, hence it MUST be unique
    • whoever comes second must register for WikiMarkupName?2 or …3

IANA and IETF clearly say that MIME types aren’t for advertising a vendor’s products. And if even the MIME type isn’t, then a subparameter like variant= shouldn’t be either. So there is no problem with that; the IANA already has rules for IP and trademarks - let’s review those and copy them verbatim into an upcoming InternetDraft?.

Since the relationship between the #!wiki magic line and the MIME type is the WikiMarkupName?, the URL/URI seems somewhat redundant at first glance. But on the other hand it can change (be updated) to reflect a different version number or to differentiate two only slightly distinct markups - if we allowed for the idea of MarkupFamilies?.

So which is it: an IANA subregistry or not? Is the markup identifier less reliable than the URL, or should one of them be left out if they are semantically equal and therefore one of them redundant?

Mario, I think we’re getting much closer. As you note, the relationship between the magic line and the MIME type is the WikiMarkupName?. The question to me is what gets registered with IANA, if anything (I’m not entirely convinced either way). Some people don’t realize this, but in XML Namespaces, the namespace prefix (e.g., “xyz” of ”<xyz:html>”) is completely insignificant, and local only to the specific document it’s used with.

Important There’s one thing that I wanted to highlight, and that is that one of the specific ideas of this MIME registration is that it’s deliberately a sort of “meta” registration, i.e., not a registration of a specific wiki syntax, but a registration for all wiki syntaxes. Registration has its benefits, but one of its biggest problems is that it’s a lot of trouble, and intended for things that last a long time. A lot of technology projects don’t last a long time, nor are they intended to. Concepts like plain text, HTML and PDF are meant to last a (relatively) long time. But even web sites aren’t, really. Some might, but there are many things that are deliberately short-term. How many people really want to still be managing the same wiki site ten years from now? IANA registration is for the long haul. This is why I’ve advocated concentrating on the use of URLs as the identifiers, since they (a) don’t require registration in order to function, and (b) last as long as we want them to.

‘nuff said for now… gotta run.

Murray, that’s a good reason. Now I can live with it. ;)
No IANA, less hassle. - But how would it read in the RFC? - You’re aware that you cannot name a URL in the RFC telling where the initial #!wiki registration takes place.

Btw, for online editing of InternetDrafts?, it proved useful to use a <pre>…</pre> page (TooMuchMarkup? disturbs editing).

Mario, I’m not quite clear what you mean by the “initial” registration? I don’t understand that or “where”, i.e., the MIME type RFC is the “where”, AFAIK. Thanks for the note on <pre>. I agree.

MarioSalzer: I meant that the RFC cannot say “the #!wiki magic registry is located at http://www.emacswiki.org/cgi-bin/community/WikiSyntaxRegistry …” - that’s not allowed (or at least not welcome @ IETF).


MurrayAltheim: I’m finding it rather difficult trying to both manage a threaded conversation and co-edit a specification in a wiki, and I’m not quite sure when it is appropriate to move or edit other people’s content. While this page is intended as a shared activity, I’ve never known editing to be easy under any circumstances, much less these. So I’m not sure how to proceed at this point. I can’t even seem to remove things that I’ve said previously without leaving other people’s responses hanging, and I don’t feel comfortable removing their content, even when it is comments in the middle of what is ostensibly a form being filled out. So this document just gets longer and thicker and harder to read. Is there some normal way of doing this? And it’s taking an enormous amount of time compared to email and a listserver. Help! I don’t think this is going to work.

Well, let’s see.

I’m not following this too carefully, and I’m not sure what your goals are. But it sounds like you’re suffering some DocumentationParalysis?.

In which case, the thing to do is to delete everything, and then just write, from scratch, the things that seem really important to you, in a BulletSummaryBlock format.

If you missed stuff, other people will tell you, and you can fill them back in.

But, that’s just how I think of stuff. I may not understand your problem at all.

In response to, “Why aren’t you involved in this discussion?”

This sort of thing just isn’t terribly interesting to me, right now. I’m working on IntComm:LocalNameServer?.

Fair enough. I read over that stuff and couldn’t figure out how you were going to make it work, but it’s not my project. It’s related to what Mario and I are discussing here, i.e., DisambiguatedNames?. – MurrayAltheim

John Sechrest on the tools-yak list recently responded regarding the choice of the #!wiki magic number, suggesting that in using #! we’re overloading the concept if we don’t use it correctly (see 144). I kinda like the simplicity of the existing plan, and I prefer it aesthetically, but I see his point. OTOH, we can just say “screw it” – nobody owns #!, and most people wouldn’t even realize there was something “wrong.” So, what would people think about changing the magic number to #!/usr/bin/wiki or #!/usr/local/bin/wiki? The shorter the better, of course, but this would suggest that (at least on unix/linux systems) a wiki application could then sit at /usr/bin/wiki to receive wiki document requests for processing. ChrisPurcell liked this idea enough to actually put one there, which I assume means that he's either put an executable at /wiki or aliased “wiki” to some other location. We don’t use #!/wiki, so interpreting it as the latter is really the way I think of it, hence I lean away from #!/usr/bin/wiki, just keeping it as #!wiki. This proposed change would also suggest that the plan would be altered to allow content following the first two parameters so that one could send directives along with the syntax name and URL identifier, e.g.,

    #!/usr/bin/wiki CeryleWiki http://purl.org/ceryle/wyki/1.0/ -x -y -z

where -x, -y, -z are optional and intended for the processor at /usr/bin/wiki. They would be left unspecified in the MIME application. This makes the line a bit longer, but as John says, more in keeping with the convention on #! and also more functional, should anyone want to put some wiki software at that path location. We couldn’t allow the path to be altered or we’d lose the point of having the magic number as the canonical identifier of a wiki text document. I’m sitting on the fence on this one right now. – MurrayAltheim

I just used #!wiki, and put the wiki processor in the same directory as the wiki pages. This works fine. The problem with using something like /usr/bin/wiki is that this directory won’t be user-modifiable on many servers, may not exist on some flavours of UN*X, and may become obsolete in 10 years time. OTOH, #!wiki will always work.

Of course, if we have a mime type, we can ignore any path that follows the #! when actually reading a text/wiki MIME, and pick the right one for our circumstances when using UN*X magic-number behaviour. This indeterminacy would then have to be part of the standard! Heh.

Incidentally, I’d suggest making the canonical URI the first argument, not the second.

Hey Chris. I agree with your portability arguments on #!wiki vs. #!usr/bin/wiki. Any particular reason for the parameter order switch? From a machine processing/parsing perspective, I can’t see it makes much difference. The way I was looking at it was that from a human perspective, people would read left-to-right seeing #!wiki, then the human-readable identifier, then some big, long, ugly URL, which might extend (in theory) past the readable limits of whatever they’re viewing the document in. Now, if either were optional, I’d certainly agree, but I think they both need to be required so that the identity relation between the MIME header and the canonical URL as found in the document can be maintained. But I’m certainly willing to hear why you suggest the change.

Just my bias :) The URI is the canonical bit, while the nick-name is just for the reader’s convenience. Writing that, though, convinced me I’m wrong. Heh.

Incidentally, you want to be typing either new:: or new:MurrayAltheim at the start of your comments. I suspect you’re using new:MurrayAltheim:; anyway, the UTC is not being appended.

Wiki Translation

Now, I admit my particular interest in wiki is a bit off the beaten track, in that I’m as much interested in the approach to authoring content via wiki text as I am in wikis themselves (not to minimize either, just to point out that I have a strong interest in authoring, it being the primary subject of my Ph.D.). One of the reasons for the WikiMime proposal is the idea that there is no OneTrueSyntax?, that everyone wants to use what suits them best for technical, aesthetic or other reasons.

So a great part of my needs are met by a wiki rendering engine, absent the wiki Web site features. I’d been using a substantially-plaintext markup and engine I developed for this purpose called APT, which I’d modified to be more wiki-like, but I recently dumped it in favour of the Radeox rendering engine, which is in Java, the same language as my project Ceryle (review available). No sense in reinventing the wheel here. It also has a fairly powerful macro facility. One of the things that comes to mind with Radeox is that it doesn’t use a hardwired syntax; it uses locale- and syntax-alterable definitions for both input and output. These live in a number of files with a syntax like the following - pairs of match and print definitions:

    filter.bold.match=(^|>|[\\p{Punct}\\p{Space}]+)__(.*?)__([\\p{Punct}\\p{Space}]+|<|$)
    filter.bold.print=$1<b class=\"bold\">$2</b>$3

Does this light up anyone’s head the way this did mine? We could do the following kinds of translations by merely developing the syntax files for each:

  • Oddmuse-to-XHTML translation (if somebody hasn’t done this already)
  • Oddmuse-to-Meatball wiki text translation (in and out defs both wiki text)
  • Meatball-to-Oddmuse wiki text translation (ditto)
  • Oddmuse-to-Radeox wiki text translation (ditto)
  • Oddmuse-to-TeX translation (!)

This would enable wiki source pages to be transferred between sites using different syntaxes (or, say, if one were changing engines), with the last one particularly interesting for those writing academic papers. Wiki-to-TeX! One could write the academic paper in wiki text, translate it to XHTML, then import it into a word processor for final formatting, or go directly from XHTML to PDF with one of the available tools. I’m wondering if anyone has started to keep track of these Radeox syntax files, as they’d be valuable tools.
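
As a rough sketch of the rule-file idea (real Radeox does considerably more - macros, locale handling - so this only illustrates the search-and-replace core; class and method names are mine, not Radeox’s API), the match/print pairs above could be loaded and applied in order like this:

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.regex.Pattern;

    // Applies an ordered set of match -> print regex rules, as read from a
    // Radeox-style syntax file, to a piece of wiki text.
    public class RuleTranslator {
        private final Map<Pattern, String> rules = new LinkedHashMap<>();

        public void addRule(String match, String print) {
            rules.put(Pattern.compile(match), print);
        }

        public String translate(String wikiText) {
            String out = wikiText;
            for (Map.Entry<Pattern, String> rule : rules.entrySet()) {
                out = rule.getKey().matcher(out).replaceAll(rule.getValue());
            }
            return out;
        }

        public static void main(String[] args) {
            RuleTranslator t = new RuleTranslator();
            // the "bold" filter quoted above
            t.addRule("(^|>|[\\p{Punct}\\p{Space}]+)__(.*?)__([\\p{Punct}\\p{Space}]+|<|$)",
                      "$1<b class=\"bold\">$2</b>$3");
            // prints: some <b class="bold">bold</b> text
            System.out.println(t.translate("some __bold__ text"));
        }
    }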

The means for this to work automatically would of course be a way to identify the specific wiki syntax of a file by the shebang and syntax identifiers. (This is a pretty powerful use case, IMO). – MurrayAltheim

Hm, it looks like just another notation for a search-and-replace based parser to me. In other words, it would be just as easy to write in Perl or any other reasonable programming language. The main difficulties remain:

  1. It still takes a substantial amount of time per engine to extract the 20-50 rules (on average) that such an engine uses and rewrite them in the notation suggested above.
  2. We need volunteers that read the original parser source and do the work, for each one of them.
  3. Doing the work for parsers that are not search-and-replace based is very tricky. Basically people like me dumped a search-and-replace based parser because it wasn’t good enough, therefore the person writing the stuff in the notation suggested above would have to solve the problem I was unable to solve. I’m not saying that it is impossible; it just wasn’t easy enough for me.

Well, you see difficulty and I see possibility; I don’t know if this is a glass-half-full, half-empty situation or not. What I’m looking to do is not solve the world’s (i.e. the entire wiki world’s) problems. What I see with Radeox is a generalization of the problem into a framework, and a means of generalizing the process as well. It would allow people to write (or simply modify) sets of syntax rules rather than write processors. Certainly, it’s not a 100% solution. But it might be an 80/20. Even if it’s a 60% solution, it’s (to me) a good start. What I see is a syntax rule set coupled with an identifier for the syntax, such that an application could sniff the first line, determine which syntax it is, then switch to the requested syntax. In Radeox, the latter is done in one line of code. For parsers that aren’t search-and-replace based, the expression of syntax rules could still be used for the majority of the conversion, leaving perhaps the more complicated stuff to macros (the way Radeox does it, AFAIK). I’m enthusiastic about this because it represents doing something towards solving the bigger problem, and I don’t see any downsides. I don’t expect it will work for everyone. I don’t expect anything to work for everyone. – MurrayAltheim

Related Links

See also: WikiMime, WikiMimeComments, WikiTextMimeType, WikiMarkupStandard, WikiInterchangeFormat, WikiPageInterchange, InterWiki, XhtmlInterWikiMarkupStandard and TheInterWikiMarkupStandardShouldBeXhtml
