Joho the Blog » [foo] Designing shared ontologies

[foo] Designing shared ontologies

Jason Cole led a session on how to design a tool he wants to build. He says most academic technology is aimed at teachers. How do we build tools that support learning? His wife just started grad school and wants a tool that will build a personal knowledge base. But it should work with her friends’ bases. And you’d like to find others working on the same issues. That means merging disparate ontologies.

What do you do about degree of belief? Ontologies are binary, but people don’t think in that binary way. “The problem with monolithic ontologies is that interesting ideas get averaged out.” RDF doesn’t allow degrees of belief, but there are other W3C specs that can be layered on top to do that.

Suggestion: Create multiples of RDF relationships, e.g. “X with_.01_probability_is_a Y.”
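To make that suggestion (and its awkwardness) concrete, here is a minimal sketch in Python of a toy triple store where the degree of belief is baked into the predicate name itself. All predicate names are hypothetical, invented for illustration; the point is what it costs to query afterwards.

```python
import re

# Toy triple store illustrating the session's suggestion (names hypothetical):
# a bare triple has nowhere to put a degree of belief, so the probability is
# baked into the predicate name, e.g. "with_0.60_probability_is_a".

def is_a(prob):
    return f"with_{prob:.2f}_probability_is_a"

triples = {
    ("X", is_a(0.01), "Y"),
    ("A", is_a(0.95), "B"),
}

# The cost of the hack: asking "is X believed to be a Y at all?" means
# pattern-matching across the whole family of generated predicates.
pattern = re.compile(r"with_(\d\.\d\d)_probability_is_a")

def belief(subject, obj):
    for s, p, o in triples:
        m = pattern.fullmatch(p)
        if s == subject and o == obj and m:
            return float(m.group(1))
    return None

print(belief("X", "Y"))  # 0.01
```

This is essentially the knows00…knows99 explosion discussed in the comments below: one logical relationship smeared across a hundred rigid predicates.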

Allan Noren: The Adaptive Book Project from CMU lets a student rearrange a syllabus in a way that makes sense to her.

(Then the conversation got interesting in a variety of directions and I forgot to record it.)

In the last ten minutes, we brainstormed features: Should look like Google, should spider for counter-examples, should require no extra work to build the ontology, retain the link from source when items are moved…


15 Responses to “[foo] Designing shared ontologies”

  1. That’s precisely the problem, as far as I’m concerned. I don’t share my ontology with anyone, never mind a stranger.

  2. RDF is just nodes and edges. And it has no means (AFAIK) to qualify an edge or describe its strength except to introduce an intermediate node.

    This has caused a problem when describing relationships. Several people have looked at foaf:knows and wanted to sub-divide it. They typically sub-class knows into tens of equally rigid edge types like “childOf”, “AcquaintanceOf”. What the world needs though is something like foaf:knows value:0.6

    There’s a bigger problem here to do with standards adoption. Anyone can create a new namespace and ontology. And they can start publishing data using it. But if nobody reads it or knows how to read it, then it’s write-only data and just so much academic wanking. So each standard, and each tag within each standard, has to compete in the standards ecology for mind share and implementation. So there are only de facto standards, with everything else being just interesting papers.

    It’s been said that “Any problem in computing can be solved with another level of indirection.” So work is being done on languages that let you infer meaning at run time from a new ontology schema you’ve never seen before. That may solve the problem of needing to understand that childOf is a sub-class of foaf:knows. But it cannot solve the problem of understanding what childOf means. So we get thrown back onto human programmers reading human-readable specs, and standards competing for mindshare.
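Julian's intermediate-node workaround can be sketched in plain Python. The vocabulary terms (ex:from, ex:to, ex:strength) are made up for illustration; since a bare triple has no slot for a weight, you introduce a relationship node and hang the strength off it:

```python
# Sketch of the intermediate-node workaround described above (vocabulary terms
# hypothetical): a plain triple can't carry a weight, so introduce a blank
# "relationship" node and attach the strength to it.

triples = [
    # Instead of:  (alice, foaf:knows, bob)  with no room for a weight...
    ("_:rel1", "rdf:type", "ex:Acquaintance"),
    ("_:rel1", "ex:from", "alice"),
    ("_:rel1", "ex:to", "bob"),
    ("_:rel1", "ex:strength", 0.6),   # the foaf:knows value:0.6 the comment asks for
]

def strength(a, b):
    """Find the strength attached to the intermediate node linking a and b."""
    nodes = {s for s, p, o in triples if p == "ex:from" and o == a}
    nodes &= {s for s, p, o in triples if p == "ex:to" and o == b}
    for n in nodes:
        for s, p, o in triples:
            if s == n and p == "ex:strength":
                return o
    return None

print(strength("alice", "bob"))  # 0.6
```

The indirection works, but every query about the edge now has to traverse the intermediate node, which is exactly the cost Julian is pointing at.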

  3. actually, i think that designed or prestructured ontologies do not work for collaboration in this sort of a system. my idea system allows users to develop their own ontologies, ie socially construct an ontology, and then while they do that, over time it builds the relationships using a dictionary and concept analysis of the information provided. if you fix things in stone when you start, then you are going to run into trouble, because most people don’t think the same or use the same tools to think about something, so as soon as you say ‘shared’ above, you require learning and negotiation of the ontology, which rules out predesign.

  4. Julian, just to make sure I was being clear: The suggestion at the session was the equivalent of introducing knows00…knows99, precisely because RDF doesn’t allow foaf:knows and value:0.6. And your final paragraph puts well the problem under discussion.

    It’s a problem because, as Jeremy says, you want to allow ontologies to grow “organically,” rather than stipulate them at the beginning. Jason’s initial idea was that each person would implicitly develop her own ontology, and then the system would (magic happens) figure out how to map them to one another. The difficulty of finding the right magic was the subject of discussion. Some in the group suggested instead that the participants work using the same form, implicitly developing their ontology socially so there’s no need to normalize afterwards.

  5. At OSCON during “What Book Sales Tell Us About the Tech Industry”, Tim and Roger told us about developing an ontology of tech books as represented by a dimensional model.

    One thought that occurred to me, both in reference to their specific data warehouse and in reference to the open data warehouse Roger mentioned during his talks, was that you can substitute a different ontology with a different model. Then you can make analyses and predictions based on various models, and look to see which models did the best job of describing reality.

    Mostly, we don’t do this in data warehousing–but we ought to.

  6. Yet, with this said and understood, still, the professional schools are so unsure and insecure about the relevance of their own activity, of this highest urge of the mind, manifest in the delicate and glorious history of philosophy that has been handed down in the greatest literature of all time, that capitulation is universally and unanimously made, in the final assessment of the whole affair, to the study of symbolic logic as the only and ultimate justification of the entire matter; as if this capitulation, in one magical abracadabra, will justify the philosopher, and the very endeavor itself; as if this petty bow, in the final tally, is what alone grants self-worth to the entire splendid enterprise. It is as if we must renounce the implied liberation of the study itself [p.14, The Life Of The Mind, Hannah Arendt], surrender like zombies in the “intramural warfare between thought and common sense,” create a victorious alter-ghost, make of it our master, and submit ourselves to it like children, because we are afraid of our own earned truth, and frightened that we might actually be as limp as others say we are; as if formal logic should essentially supersede the formative effort of the concept, the shell somehow more desirable than the muscle–not to mention the pearl, and as if the mysteriously abstract signs of symbolic logic, like a new pig Latin, will protect us and deliver us from the raucous laughter of Thrasymachus, and save us, once and for all, from having to stand apart from the crowd. Do you wonder why other “academics” doubt you so? Can you blame them when you are afraid to run your own marathon–and to win it! E’pure si muove!

  7. Don’t put degree of belief information into propositions. There’s quite a bit that can be drawn from existing logical notation to help here.

    For example, let P represent A is a B

    Then: Pr(P)=0.5
    Or Pr(A is a B) = 0.5

    This is useful for other forms.

    Necessarily, A is a B
    L(A is a B)

    Possibly, A is a B
    P(A is a B)

    John thinks that A is a B
    T(j)(A is a B)

    The advantage of this sort of notation is that it can be nested.

    It is necessary that John thinks that there is a 0.5 probability that A is a B

    L(T(j)(Pr(A is a B)=0.5))

    In general, the propositional attitude will have to be described as an element in a vocabulary, since there are so many (must, might, may, could, would, should, believe, hope, fear, want, wish…)
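Stephen's nested notation is easy to model in code. Here is a sketch in Python; the tagged-tuple representation is my own choice for illustration, not anything from the comment:

```python
# Stephen's nested operators modeled as tagged tuples (representation mine):
# Pr(...) probability, T(agent)(...) belief, L(...) necessity.

def Pr(prop, p):    return ("Pr", prop, p)
def T(agent, prop): return ("T", agent, prop)
def L(prop):        return ("L", prop)

# "It is necessary that John thinks that there is a 0.5 probability that A is a B":
stmt = L(T("j", Pr(("is_a", "A", "B"), 0.5)))

def render(node):
    """Pretty-print a nested attitude back into the comment's notation."""
    tag = node[0]
    if tag == "Pr":
        return f"Pr({render(node[1])})={node[2]}"
    if tag == "T":
        return f"T({node[1]})({render(node[2])})"
    if tag == "L":
        return f"L({render(node[1])})"
    rel, a, b = node  # base proposition, e.g. ("is_a", "A", "B")
    return f"{a} {rel.replace('_', ' ')} {b}"

print(render(stmt))  # L(T(j)(Pr(A is a B)=0.5))
```

Because each operator just wraps another proposition, nesting comes for free; the open question from the session, mapping one person's vocabulary of attitudes onto another's, is untouched by the representation.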

  8. Stephen, does rdf enable the expression of these relationships?

  9. Shared ontologies

    Paolo, Matt, you should talk to David about the k-collector :-)

  10. So, Mr Bond, you are beginning to discover that there might be a “little problem” with ontologies?
    LOL

  11. Uuuggh! Switching He and She is like battling over the toilet seat. It is insipid.

    Example 10
    Original: “Reason is what distinguishes man from other animals.”
    Revised: “Reason is what distinguishes humans (human beings) from other animals.”
    When ‘man’ is used to contrast species, substitute ‘humans’ or ‘human beings’. Use ‘who’ for ‘he’.

    Example 11
    Original: “For Aristotle, man is, above all, Political Man.”
    Revised: “Aristotle regarded human beings as inherently political.”
    No nonsexist counterparts to ‘Political Man’, ‘Economic Man’, etc. preserve the exact flavor of these terms, perhaps because they focus on stereotypically male behavior. Note that much of ‘Economic Woman’s’ labor is still unpaid, and hence is excluded from the G.N.P. Sexist language may camouflage a theory’s sexist assumptions.

  12. Perhaps the right way to do this is really as suggested by, among others, Dave Winer: http://davenet.userland.com/2002/06/02/theGooglishWayToDoDirectories
    A g o o g l e approach to ontology-likeness.
    (I couldn’t write the name of the world’s favourite search engine because of your spamfilter)

  13. I didn’t know I have a spam filter that blocks free text! I’ll look into it. Sorry for the problem.

  14. I know you’re using “ontologies” in a specialized way, but as a religion geek, when I see the word “ontologies” I think of the term in its religion sense — the laws of being, the way the world works. In a way, every religion is a shared ontology…

    Totally unhelpful, I realize, but I like the crossover of terminology, so I figured I’d mention it.

  15. Do you have a sense of how formal an ontology his wife is aiming for?

    Using WikiWord-s can at least provide an easy way to link across individual spaces…

    http://webseitz.fluxent.com/wiki/OntolOgy
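The WikiWord linking the comment mentions can be sketched in a few lines of Python (the base wiki URL is a placeholder): any CamelCase token gets rewritten as a link into a wiki namespace, which is how separate personal spaces can cross-link without agreeing on a schema first.

```python
import re

# Minimal sketch of WikiWord auto-linking (base URL hypothetical): every
# CamelCase token becomes a link into a shared wiki namespace.

WIKIWORD = re.compile(r"\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b")

def linkify(text, base="http://example.com/wiki/"):
    """Rewrite each WikiWord in text as a markdown-style link."""
    return WIKIWORD.sub(lambda m: f"[{m.group(1)}]({base}{m.group(1)})", text)

print(linkify("See OntolOgy and SharedOntologies for more."))
```

The convention costs nothing at write time, which is exactly the "no extra work to build the ontology" feature the session brainstormed.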
