OntologyRepresentation

What

OntologyRepresentations are ways of representing the relationships between various concepts. For example: "Cats are a type of mammal," or "An arm is a part of the body," or "If A is the father of B, then B is the son of A," or "Every animal has exactly one father, and the father of an animal is always himself an animal." Ontology representations can talk about types of concepts (e.g. cats, mammals), but they can also talk about properties (the "is father of" property, for example).
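
To make this concrete, here is a minimal sketch of how a few of those statements could be written down in RDFS and OWL, using Turtle syntax (the ex: namespace and all the term names here are invented for illustration, and the domain/range reading of the last statement is only approximate):

 @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
 @prefix owl:  <http://www.w3.org/2002/07/owl#> .
 @prefix ex:   <http://example.org/ns#> .

 # "Cats are a type of mammal"
 ex:Cat rdfs:subClassOf ex:Mammal .

 # "If A is the father of B, then B is the son of A"
 ex:sonOf owl:inverseOf ex:fatherOf .

 # "The father of an animal is always himself an animal" (roughly)
 ex:fatherOf rdfs:domain ex:Animal ;
             rdfs:range  ex:Animal .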

The line between "ontological" statements and simple logical statements (e.g. "John has exactly one car, and his car is a Buick") can be fuzzy. In short, ontological statements are kind of "meta". In more detail, ontological statements have to do with the properties of the vocabulary that we use to describe things, rather than with contingent properties of this particular world.

Ontologies vs. Ontology Representations

"Ontology" is a word taken from philosophy. "Ontologies" are cognitive and psychosocial phenomena. Since we don't know how our minds work, these things are not well characterized.

"Ontology representations", on the other hand, are engineering artifacts inspired by the philosophy and cognitive science. They are merely models of one interpretation of the original idea of "ontology". In the contexts of the SemanticWeb, LibraryScience, or ArtificialIntelligence (and on this wiki), the word "ontology" is usually used as a shorthand for "ontological representation". But don't be misled; what is meant is usually not the broad and ill-defined philosophical concept of "ontology"; rather, we are just talking about a kind of data representation. It is not necessary for the engineering concept of "ontology representation" to correspond with what our minds are actually doing with regard to the philosophical concept of "ontologies".

Why?

Ontologies are useful for interoperability (you want to tell the computer, "when this standard says 'author', that means the same as when that standard says 'creator'"), InformationRetrieval? (see ControlledVocabulary), and artificial intelligence¹.
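
As a tiny sketch of that interoperability case (in Turtle syntax; the my: namespace is invented for illustration, while dc:creator is the real Dublin Core term), a single OWL statement can assert the mapping:

 @prefix owl: <http://www.w3.org/2002/07/owl#> .
 @prefix dc:  <http://purl.org/dc/elements/1.1/> .
 @prefix my:  <http://example.org/mystandard#> .

 # "when this standard says 'author', that means the same
 # as when that standard says 'creator'"
 my:author owl:equivalentProperty dc:creator .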

Discussion

Why is it that one can always read about ontologies and ontology representations, but sees them so rarely in real life? Are ontologies like the legendary Yeti snowman? Everyone talks about them, but no one is able to show a convincing proof (of concept). Where is the real-world problem that is shown to be solved by the application of an ontology?

I'm betting that there are plenty of demo applications nowadays but that I just don't know about them… will report back here when I find them…

A quick search is coming up with a couple of categories of applications:

  • InformationRetrieval?: ontologies are used as a ControlledVocabulary for MetaData to classify information resources. Some examples at http://esw.w3.org/mt/esw/archives/cat_ontologybased_searching.html. One example is the CreativeCommons ontology, which allows the construction of machine-readable MetaData about what you are allowed to do with a document. Another example is Flink (paper; Google cache of paper), although when they say "ontology", they seem to mean just what we mean by "vocabulary"; I'm not sure if they actually use the machine-readable info about the definitions of the terms in the vocabulary. Another example is Bibster (paper); here is an example showing that Bibster actually uses the formal semantics in the vocabulary: "(3) When navigating the RDF graph, SeRQL? exploits the formal semantics of path labels. For example, <rdfs:subClassOf> is interpreted as a reflexive transitive relation and upward inheritance of instances is interpreted through the <rdf:type> relation (cf. [5] for full details)."² (A small sketch of this kind of inference follows this list.)
  • SemanticWeb / data interchange: ontologies are used to achieve "automatic interoperability" between different formal languages / data formats
  • MultiAgentSystems?: ontologies are used as languages that automated agents use to communicate with each other, and especially to describe themselves to other agents; for example, web service composition tools can use ontologies in formal descriptions of MetaData about web service agents.
  • A.I.: ontologies are used as part of a formal logic inference system (see http://lists.w3.org/Archives/Public/www-webont-wg/2003May/0083.html for some examples of the multi-agent-system and A.I. uses).
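
Here is a small sketch (in Turtle syntax; the ex: namespace and instance names are invented) of the kind of rdfs:subClassOf inference the Bibster quote above describes:

 @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
 @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
 @prefix ex:   <http://example.org/ns#> .

 ex:Cat    rdfs:subClassOf ex:Mammal .
 ex:Mammal rdfs:subClassOf ex:Animal .
 ex:felix  rdf:type        ex:Cat .

 # An RDFS-aware engine can additionally entail:
 #   ex:Cat   rdfs:subClassOf ex:Animal .   (transitivity)
 #   ex:felix rdf:type        ex:Animal .   (upward inheritance)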

By the way, for SemanticWeb applications, check out the Semantic Web Challenge.

Well, wait --

Just about every data structure in the computer world is an "ontology."

Like in Monty Python and the Holy Grail: "It's only a model."

If we have a simple data structure, Person:

 #include <time.h>  /* for time_t */

 struct Person {
   char first_name[20];
   char last_name[20];
   time_t date_of_birth_seconds_since_epoch;
 };

We've just created an OntologyRepresentation.

Is it not an ontology because it's not expressed in ESW:RdfXmlSyntax?

Excuse me. :)

 (insert big block of RDFS here that describes:
  class Person has:
  * first_name (String)
  * last_name (String)
  * birthdate (Date))
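
For what it's worth, a minimal sketch of that block (in Turtle syntax rather than ESW:RdfXmlSyntax, for brevity; the ex: namespace is invented for illustration):

 @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
 @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
 @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
 @prefix ex:   <http://example.org/ns#> .

 ex:Person a rdfs:Class .

 ex:first_name a rdf:Property ;
     rdfs:domain ex:Person ;
     rdfs:range  xsd:string .

 ex:last_name a rdf:Property ;
     rdfs:domain ex:Person ;
     rdfs:range  xsd:string .

 ex:birthdate a rdf:Property ;
     rdfs:domain ex:Person ;
     rdfs:range  xsd:date .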

Perhaps it becomes an "ontology" when we have some ideas about what we want to do with it. A thing can be a different thing because it's in a different context, after all.

Things we expect to do with ontologies:

  • Extend them with new data types. (Perhaps we want to say that a particular Person has a cat, and so we need to extend the model a bit; see the sketch after this list.)
  • Dynamically translate them to other forms. (Perhaps some other model and its reasoning work with age instead of birthdate.)
  • Merge datasets underneath the model.
  • Introspect over the data model.
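
For instance, the cat extension from the first bullet might look like this (Turtle again; the ex: names are invented), added alongside the existing Person schema without modifying it:

 @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
 @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
 @prefix ex:   <http://example.org/ns#> .

 ex:Cat    a rdfs:Class .
 ex:hasCat a rdf:Property ;
     rdfs:domain ex:Person ;
     rdfs:range  ex:Cat .

 # instance data about a particular Person
 ex:alice ex:hasCat ex:felix .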

Note that you can do all these things with our C "ontology." The point is that it wasn't set up for that.

So in the SemanticWeb, when we're talking about Ontology, you can replace the word "Ontology" with "Data Model," and add a note to yourself that we have this certain set of expectations: the things we're going to do with it.

Other than that, there is little difference between "ontology" and a data model.

Note that the properties described above (the things we want to do with ontologies) naturally lend themselves to graph relationship diagrams, rather than to tables (the relational model, structures in RAM).

Bayle, if you agree, I'd like to work the last explanation I made into the DocumentMode text.

And perhaps throw it onto the ESW wiki as well; I keep thinking: "What a waste that this work isn't all going onto the ESW wiki!"

I think pretty much the entirety of CategorySemanticWeb? belongs there, more or less..!

I don't think that the Person example above has anything to do with an ontology (RDF, XML or not). Otherwise each and every database would be a work of ontology. Ontology probably answers the question "what do we know about the term person, independent of its definition (which is terminology) or its description?". Databases are about describing instances. Ontology doesn't care about instances. Ontology could probably be seen as a theory of a database structure potentially describing everything (independent of specific application needs). But the problem of Ontology is that it builds on language and implicitly assumes that language is logical and consistent, which it isn't.

Okay: to repeat what I understand: you're saying an ontology is:

  • the model
  • plus the information about the relationships between the elements of the model

I'm not so worried about the consistency of language.

At the 45th Street Clinic, they have transgendered patients. But their database will only accept M or F! What are we to do?! Well: they don't throw out databases.

And: we shouldn't throw out ontologies. They're useful, even if they aren't perfect.

http://en.wikipedia.org/wiki/Ontology_(computer_science)

If a patient database is an ontology, then typing a phone number is mathematics.

No, no, no; I concede that point.

I agree: The ontology includes notes about the relationships between the elements of a model, yes?

But I'm saying: I think they're still useful, even if they are imperfect. I'm noting this point, because at the last Python meeting, the subject of the Semantic Web came up, and an AI researcher said, "augh! Semantic web! No!" And we asked why, and it was this same point that came up, the same point that's come up in a million places: "But it's not AI! It's imperfect! Even humans don't know what they're talking about!"

And I repeated my argument: Technology doesn't have to be perfect to be useful. And I brought up the story about the 45th street clinic and the transgendered patients.

His example was a log: "If I'm in a forest, and ask for a chair, it's not going to point out a log." And I said: "I'm okay with that. I'm fine if it can just point out stuff in Ikea. We can worry about the forest log chair later."

Actually, he was mainly upset because so much research was going into the Semantic Web, and he was basically bored of description logics, which are well-researched, well-understood, well-covered territory. I told him that we're excited about the Semantic Web because it's relatively easy.

Lion, I fully agree. Perfection is not important. But why does calling an aspect of a database an "ontology" increase its usefulness? For me it decreases the usefulness of the word "ontology", because the word is deprived of its meaning.

Links

Research institutes and groups


CategoryAboutOntologies?

CategorySemanticWeb?

Footnotes:

1. The question of whether knowledge representation in the form of formal logic is actually useful in artificial intelligence is controversial. But there are people interested in trying to use it for this purpose.
