Cataloguing metadata against multimedia formats

    From the not-my-area-of-PhD-research-but-nonetheless-an-interesting-field department. I was having a chat with Tom Worthington, one of the visiting fellows at the Department of Computer Science at ANU, and he was talking about some students of his who are working on metadata entry and use for multimedia files. This is an interesting topic, as it's tangential to what I do all day at work, and an interestingly hard problem.

    Imagine that you have all the footage for a film -- for instance Mad Max, including the stuff which ended up on the cutting room floor. I've always assumed that you'd store that in your electronic content management system. Somehow, when you entered the movie footage into your system, you'd assign metadata to little chunkettes of it, and that metadata would be stored by the system.

    Well, it's not all that magical. You'd presumably have some sort of human-assisted automated process to determine the metadata, and then add that into your system as well. This would probably involve something like Google's teletext indexing stuff, but would hopefully also do interesting things like informing the operator that the background of a sequence of frames had changed enough to imply a change of camera shot, and therefore a different scene.
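
    To make that concrete, here's a minimal sketch of the kind of shot change detection I mean, written against OpenCV's histogram comparison. The threshold and the library choice are my assumptions -- I have no idea what the students are actually using:

        # A sketch of shot change detection, assuming OpenCV (cv2) is
        # available. Compare colour histograms of consecutive frames; a
        # large drop in similarity suggests the background has changed
        # enough to imply a new camera shot.
        import cv2

        def find_shot_changes(path, threshold=0.6):
            """Yield frame numbers where histogram correlation drops
            below an (arbitrarily chosen) threshold."""
            cap = cv2.VideoCapture(path)
            prev_hist = None
            frame_no = 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
                hist = cv2.calcHist([hsv], [0, 1], None, [50, 60],
                                    [0, 180, 0, 256])
                cv2.normalize(hist, hist)
                if prev_hist is not None:
                    if cv2.compareHist(prev_hist, hist,
                                       cv2.HISTCMP_CORREL) < threshold:
                        yield frame_no  # candidate boundary for the operator
                prev_hist = hist
                frame_no += 1
            cap.release()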

    Anyways, you could then search your electronic content management system for the metadata you needed -- "give me a close up shot of Mel Gibson from the first 25 minutes of the movie against a desert background" -- and the right thing would happen. That is, the content would be streamed to your machine, and playback would start from that point.
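
    For what it's worth, here's roughly how I picture the query side. The record layout and field names are mine, purely for illustration:

        # A sketch of the metadata records a query like the one above
        # might run against. Field names are illustrative only.
        from dataclasses import dataclass

        @dataclass
        class Annotation:
            asset: str          # which piece of footage this describes
            start_secs: float   # offset into the asset
            shot_type: str      # e.g. "close up", "wide"
            subjects: list      # people identified in the shot
            setting: str        # e.g. "desert", "interior"

        def search(index, shot_type, subject, setting, before_secs):
            """Return (asset, offset) pairs so playback can start from
            the right place."""
            return [(a.asset, a.start_secs) for a in index
                    if a.shot_type == shot_type
                    and subject in a.subjects
                    and a.setting == setting
                    and a.start_secs < before_secs]

        # hits = search(index, "close up", "Mel Gibson", "desert", 25 * 60)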

    Anyways, Tom suggested that perhaps you store the information pre-carved into those little chunks, and index them separately. You would of course retain information about the sequence of the snippets, so you could reassemble them later. That's an interesting idea, especially as it solves a lot of the problems of moving very large volumes of data around just to get the 10 second fragment you actually want. If my TiVo is any indication, medium quality video runs to about a gigabyte an hour, so I imagine movie quality is a lot bigger than that. We're talking serious amounts of data.
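
    A sketch of what those pre-carved fragments might look like, with the sequence information retained alongside the searchable metadata (all the field names are mine):

        # A sketch of a pre-carved fragment record. Each fragment is
        # indexed separately, so a search only pulls back the ten
        # second chunk you wanted, not the whole movie.
        from dataclasses import dataclass, field

        @dataclass
        class Fragment:
            fragment_id: str
            movie: str
            sequence: int        # position in the original cut
            duration_secs: float
            tags: set = field(default_factory=set)  # searchable metadata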

    Now, this approach is interesting, and has its merits. It's not perfect, however. You'd need some sort of reassembly algorithm which allowed you to reassemble multiple fragments before delivery. That's probably not a big problem though, as you'd need that anyway for a system like this.
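
    The reassembly step itself is simple enough in the sketch world -- order by sequence number and splice -- though a real system would stream rather than buffer everything in memory:

        # A sketch of reassembly: fetch the fragments a query matched,
        # order them by their original sequence numbers, and splice
        # them together before delivery. Real container formats would
        # need proper remuxing rather than naive byte concatenation.
        def reassemble(fragments):
            """fragments: iterable of (sequence_number, chunk_bytes) pairs."""
            ordered = sorted(fragments, key=lambda pair: pair[0])
            return b"".join(chunk for _, chunk in ordered)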

      Aside: what do you use a system like this for? Say I want to make a documentary about Mad Max. I could do that by building a Makefile for the documentary which took the following information and presented it to me as a finished movie (there's a sketch of the idea just after this list):

      • An opening sequence I made especially for the documentary, overlaid with some theme music
      • A close up stock shot of Mel Gibson
      • Some footage from the movie
      • A narration sequence with some diagrams of Max's car
      • ... and so on
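
      Sketched in Python rather than make, and with every segment name and the fetch function invented for illustration, that "Makefile" might look like:

          # A sketch of the documentary "Makefile": a declarative
          # recipe of segments that gets fetched and spliced into a
          # finished movie. Real container formats would need remuxing,
          # not naive concatenation.
          recipe = [
              ("local",  "opening_sequence_with_theme.mov"),
              ("stock",  "mel_gibson_close_up"),
              ("madmax", "fragment_0042"),
              ("local",  "narration_with_car_diagrams.mov"),
              # ... and so on
          ]

          def build_documentary(recipe, fetch, output_path):
              """fetch(source, name) -> bytes is assumed to hide the
              content management system plumbing. Re-running the build
              after tweaking the recipe regenerates the documentary,
              just like make."""
              with open(output_path, "wb") as out:
                  for source, name in recipe:
                      out.write(fetch(source, name))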


      This would remove a lot of the repetitive editing work from this style of production, and would make it easy to tweak the presentation when new information came to light. You can't assume that the content is for traditional broadcast either -- I'm thinking this kind of stuff could be delivered using the newer, faster networks that most academic places have today, and everybody else will presumably have in a decade or so. Heck, imagine a podcast which could be tweaked on the fly based on delivery variables -- a personalised G'Day World podcast, for instance, which drops the segments you don't like.


    The other place this technique falls down is with "single scene" content, such as the police interview after my car accident. In those cases you're still going to need to be able to store an offset into the file for a given search term.
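
    For that case you'd want something more like an index of offsets into the one file. Again, names here are purely illustrative:

        # A sketch of an offset index for "single scene" content: the
        # file stays whole, and each search term maps to the offsets
        # where it occurs, so playback can seek straight there.
        from collections import defaultdict

        class OffsetIndex:
            def __init__(self):
                self._terms = defaultdict(list)

            def add(self, term, offset_secs):
                self._terms[term].append(offset_secs)

            def lookup(self, term):
                return sorted(self._terms[term])

        # index.add("registration number", 312.5)  # 5m12s into the interview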

    Interestingly, TOWER is well positioned to do a lot of this now... We can byte serve files with some beta code (which I should really write about here) that we're working on at the moment, and there are some very sexy batch search facilities in the Connectivity Toolkit I've been working on that would make this easier too.
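
    Byte serving is essentially just honouring HTTP range requests, so a client can fetch only the slice of a large file it wants. A minimal sketch of the idea -- nothing to do with the actual TOWER code, which I haven't described here:

        # A sketch of byte serving: honour an HTTP Range header and
        # return only the requested slice of the file. Illustrative
        # only; error handling omitted.
        import os

        def serve_range(path, range_header):
            """range_header like 'bytes=1000-1999'.
            Returns (status, headers, body)."""
            size = os.path.getsize(path)
            spec = range_header.removeprefix("bytes=")
            start, _, end = spec.partition("-")
            start = int(start)
            end = int(end) if end else size - 1
            with open(path, "rb") as f:
                f.seek(start)
                body = f.read(end - start + 1)
            headers = {"Content-Range": "bytes %d-%d/%d" % (start, end, size),
                       "Content-Length": str(len(body))}
            return 206, headers, body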

    I think a hybrid scheme between the two storage techniques has a lot of promise, and that we should play with this a little more. Tom has a research project lined up, so it's just a case of pitching it to the powers that be at work now.

    On a side note, Google believes that there are no pages on my employer's site that define the term electronic content management. That makes me sad. If someone would like to correct that, I would be more than happy to donate some Google foo to the indexing of such pages.

    Tags for this post: blog work research multimedia metadata content management
    Related posts: Gartner recommends blogging over electronic content management; Why document management is good; RemoteWorker v70; Measuring the popularity of SMTP server implementations on the Internet; Technorati porn tags; Are license tags common in web pages?

posted at: 01:10 | path: /diary | permanent link to this entry