Perl XTM: It's dead, Jim!

Perl XTM, the most ancient Perl implementation of Topic Maps, has now retired. Or should I say: it has been forcefully retired by me, as I deleted it from CPAN.

I started it in 2001, when Topic Maps were young and promising, and of course I followed the only thing which was known at that time: the XTM format. Needless to say, this was not very effective. And needless to say, following the meandering standard had little entertainment value.

In 2005 I completely revamped it into the now current implementation and published it as Perl TM on CPAN. If - for some weird and unlikely reason - you need access to pleistocenic software, you will have to go to the CPAN BackPAN.

If you still have links pointing to Perl XTM, then either (a) worry about my new shipment of Voodoo dolls, or (b) consider updating the link. Of course, I leave that completely up to you...


"Where do we go from here / which is the way that's clear"

Pardon my quote; it's from an old '80s song.

I see that the XTM specification hasn't been updated in a few years. I had thought it was a valid specification in the years since I first took a look at it myself. I notice that the specification does seem to have some difficulties about it, but I think I can understand that, given the essentially abstract nature of the subject matter - and I had thought that 1.0 might not be the final version of the XTM spec.

Tonight, I was going to post a couple of questions about some of the diagrams in Annex B of [XTM 1.0] to a mailing list about XTM, but ... now that I take a look at what I can see of the situation, I wonder: is XTM 1.0 still regarded as an active specification? has it been superseded? (is TMRM official?) or has XTM been temporarily put on the back burner of the industry, may we say?

Judging by my first and subsequent impressions: I think XTM is neat; I think it may support a more succinct body of reference documentation than Wikipedia can honestly hope to, given the latter's much more discursive model. I thought that with a convenient interface supporting "linkage" with discursive texts, XTM could be very useful. In fact, I was planning on supporting XTM in a data storage project I've started developing - in Java, this time, as experience would hold that I sure won't be able to make it work in Common Lisp. That, then, brings me to the questions I came up with earlier tonight.

If XTM itself is no longer an active standard, then do we have to buy documentation from the ISO, in order to implement topic maps in XML?

I'm honestly lost in the soup, about this. Did someone call "abandon ship" on XTM?

Sean (not verified) | Tue, 08/11/2009 - 03:56

Re: "Where do we go from here ..."

Hi Sean,

First off, maybe you misinterpreted the blog entry. It is about abandoning the ancient Perl implementation (there is a newer one), not XTM itself.

I see that the XTM specification hasn't been updated in a few years ... - and I've thought that 1.0 might not be the final version of the XTM spec.

And it wasn't. Which leads me to assume that you are looking in the wrong place.

The most important part of the standards is not XTM; it is TMDM. That is pretty robust, and when it changes, it will be a larger change some time from now.

.... I wonder, is XTM 1.0 still regarded as an active specification? has it been superseded?

Yes, it has been superseded. There you will also find a much leaner XTM 2.0. And there are other interesting alternatives, such as TM/XML.

Is TMRM official?

No, not yet.

.... or has XTM been temporarily put on the back burner of the industry, may we say?

I would say the industry concentrated on software around the model (TMDM) and treated XTM as what it is: just one possible serialisation. It is nothing more, really.

Judging by my first and subsequent impressions: I think XTM is neat;

Which makes it official: I do not like you :-)

No, seriously, I cannot understand this obsession with text formats, be they XML or not. It is all about the formal model behind it. And that ...

I think it may support a more succinct body of reference documentation than Wikipedia can honestly hope to, given the latter's much more discursive model.

... exactly right, captures the essence.

If XTM itself is no longer an active standard, then do we have to buy documentation from the ISO, in order to implement topic maps in XML?

You *can* sponsor them. *I* just click on the link :-)

Did someone call "abandon ship" on XTM?

No, you were lost on some strange island. ;-) Welcome aboard! Care for a drink?

rho | Tue, 08/11/2009 - 07:56

Obsession with text formats

Speaking for myself, I think my personal interest in text formats (I don't THINK it's an obsession, but of course you never do, do you?) reflects the fact that in my experience, for most text formats (at least the ones I regard as well defined), it is clear when a text stream is, and when it is not, an instance of the format defined. In contrast, most definitions of "models" make it difficult to tell the difference between a conforming implementation, serialization, or representation of the model and the coffee cup on my desk.
(Quick: prove to me that my coffee cup does, or does not, conform to the Dublin Core model of metadata! Eve Maler has already proved that, like my refrigerator, it does conform to the XML Info Set spec.)

There are surely exceptions on both sides.

Formal models are important, of course. But so are serialization forms. It's easy to exchange XML documents, and have XML-capable software read the data at the other end. It's somewhat harder to exchange data without any agreement on serialization forms. As a user, I do want the semantics of the data to survive a transfer from one software package to another; formal models are useful for that, when people can agree on them. But for the transfer to work, the package I start from and the package I go to must agree on a serialization form. A humble role, perhaps, but an essential one.

C. M. Sperberg-McQueen (not verified) | Thu, 09/03/2009 - 04:20

Re: Obsession with text formats

Speaking for myself, I think my personal interest in text formats ...

Well. Well. Well. :-)

Quoting Catbert, my most evil p\rho{}tagonist:

  • Behind every text format, there is a model. Just like behind a successful researcher there is an evil cat. Or behind every broke man an ex-wife.

I think what Catbert means is that if one wants to serialize a Topic Map into an XTM stream (i.e. serialized text), then implicitly one first transforms a topic map model instance into an instance of the XML model (i.e. the infoset). Only in a second step is the latter instance rendered into funny symbols, embedded into even funnier <> symbols.
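Catbert's two-step could be sketched like this: a model instance first, angle brackets second. (A minimal illustrative sketch only - the dictionary structure and element names below are assumptions for the example, neither the real XTM 2.0 schema nor the Perl TM API.)

```python
import xml.etree.ElementTree as ET

# Step 1: an in-memory model instance (hypothetical structure,
# standing in for a TMDM topic map instance).
topic_map = {
    "topics": [
        {"id": "xtm", "name": "XML Topic Maps"},
        {"id": "tmdm", "name": "Topic Maps Data Model"},
    ]
}

def to_xml(tm: dict) -> str:
    """Map the model instance onto an XML tree, then render it as text."""
    root = ET.Element("topicMap", version="2.0")
    for t in tm["topics"]:
        topic = ET.SubElement(root, "topic", id=t["id"])
        name = ET.SubElement(topic, "name")
        value = ET.SubElement(name, "value")
        value.text = t["name"]
    # Step 2: only here do the funny <> symbols appear.
    return ET.tostring(root, encoding="unicode")

print(to_xml(topic_map))
```

The point of the two functions being separate is exactly Catbert's: everything interesting happens in the model-to-model mapping; the final textual rendering is mechanical.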

Catbert continues:

  • So the text fetishist should never forget that they are slaves of models, just like the rest of us!

At least that is what Catbert always says.

Formal models are important, of course. But so are serialization forms. It's easy to exchange XML documents, and have XML-capable software read the data at the other end.

I personally have a problem with serialization being thought of as so .... linear. If you look at the new generation of infographics, a linear arrangement of the information is not what we want or need.

It's somewhat harder to exchange data without any agreement on serialization forms. As a user, I do want the semantics of the data to survive a transfer from one software package to another;

Catbert says that XML has almost no inherent semantics. So effectively nothing can be learned on the receiver's end about semantics. Even RDF(S) (or Topic Maps) have very little intrinsic semantics (transitive subtyping, perhaps).

So getting XML over the wire may not be so informative after all; unless you know what the content is all about. And JSON or YAML would do equally well, but without all the fluffy fluff.
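To illustrate the "JSON would do equally well" point: the same toy model instance survives a JSON round trip untouched, yet the receiver learns nothing about what a "topic" means. (The structure is an assumption for the example, not a standardized Topic Maps JSON interchange format.)

```python
import json

# The same hypothetical model instance as before.
topic_map = {
    "topics": [
        {"id": "xtm", "name": "XML Topic Maps"},
        {"id": "tmdm", "name": "Topic Maps Data Model"},
    ]
}

serialized = json.dumps(topic_map)
restored = json.loads(serialized)

# The round trip preserves the structure exactly...
assert restored == topic_map
# ...but the bytes on the wire say nothing about the semantics of
# "topics", "id" or "name"; that agreement lives outside the format.
print(serialized)
```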

That's what Catbert says.

rho | Thu, 09/03/2009 - 19:32