June 29, 2007
I definitely have more to say about my experiences at Hack Day London. Between a busy new project at work, an elbow injury that makes my typing less than half as efficient as normal, and all the usual summer things going on, it’s been hard to find the time (and the will) to fire up the computer in my (brief) free moments to write down my thoughts. I will do it soon, though, I promise.
I plan to write about the session I went to on Geo mobile apps, the hypothetical hacking I did with my friend Chris, and some of the cool hacks that were on display at the end. Check back soon.
June 21, 2007
The first talk I was able to attend at Hack Day was by Flickr’s Aaron Straup Cope and Dan Catt, about Machine Tags. I’m really interested in this because it adds another layer of metadata to tags, allowing them to be read by machines. I’ve heard them described as triples, and in a way I suppose that’s true, but these are not like RDF triples. Basically, a machine tag consists of a namespace, a predicate, and a value organized in a certain syntax. It’s pretty simple, but should allow services to make use of the additional data pretty easily. I scribbled a note on my paper that says:
- folksonomy :: taxonomy
- machine tags :: RDF
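The syntax itself is simple enough to pick apart in a few lines. Here’s a minimal sketch in Python of splitting a machine tag into its three parts (the example tags are just illustrations, using the `geo:` and `upcoming:` namespaces that show up on Flickr):

```python
def parse_machine_tag(tag):
    """Split a machine tag into (namespace, predicate, value).

    Machine tags follow the pattern namespace:predicate=value,
    e.g. "geo:lat=51.594" or "upcoming:event=123456".
    """
    head, _, value = tag.partition("=")
    namespace, _, predicate = head.partition(":")
    return namespace, predicate, value

# A geo machine tag, like Flickr adds for a photo placed on the map:
print(parse_machine_tag("geo:lat=51.594"))
# ('geo', 'lat', '51.594')
```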
That’s a simplification, of course, but it seemed to be a good way to describe the relationship. The two main issues that will affect the adoption of machine tags are:
- What can you do with them? I think the answer to this one is pretty wide open. You can make apps that will use machine tags to express relationships between content, people, etc, and trigger all kinds of behaviors. Flickr’s API lets you query machine tags, and basically what you do with it is just limited by your imagination.
- Where is the data coming from? The question is, though, will anyone, aside from you, be adding the kind of machine tags that will make your application work? This is really two questions.
- What’s going to make me go back into nearly 1500 photos and add more tags to them? Something needs to be done to make this a little easier or people will never do it. I’m a pretty dedicated information geek. I’ve spent an hour disambiguating two names on Wikipedia. But I’m already dragging my feet adding my backlog of Flickr photos to the map; I can’t see sifting through all of them again.
- Even if I do, what’s to say my machine tags will be compatible with your application? Do we need some kind of standards? Or are we expecting people to add new machine tags to their content for each application they want to contribute to?
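On the query side, the Flickr API exposes machine tags through `flickr.photos.search`, which takes a `machine_tags` parameter (as I understand it, leaving the value off, as in `upcoming:event=`, matches any value). Here’s a rough sketch, not Flickr’s official client; the API key is a placeholder you’d replace with your own:

```python
import json
import urllib.parse
import urllib.request

FLICKR_REST = "https://api.flickr.com/services/rest/"

def build_search_url(machine_tags, api_key="YOUR_API_KEY"):
    """Build a flickr.photos.search URL filtered by machine tags."""
    params = urllib.parse.urlencode({
        "method": "flickr.photos.search",
        "api_key": api_key,
        "machine_tags": machine_tags,  # e.g. "upcoming:event=" for any event
        "format": "json",
        "nojsoncallback": 1,
    })
    return FLICKR_REST + "?" + params

def search_machine_tags(machine_tags, api_key):
    """Fetch and decode the search results (requires a real API key)."""
    with urllib.request.urlopen(build_search_url(machine_tags, api_key)) as resp:
        return json.load(resp)

# e.g. search_machine_tags("upcoming:event=", api_key="...")
```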
Clearly, what’s needed is something that will assist users by automatically generating suggested machine tags that they can then revise, approve, or decline. Interesting things to think about at Hack Day…
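To make the idea concrete, here’s a toy sketch of what such a suggester might look like. The `photo` dict shape and the `exif:` namespace are my own invention for illustration (only `geo:lat`/`geo:lon` are conventions I’ve actually seen on Flickr); the point is that suggestions come from metadata the photo already carries and are offered to the user, never applied automatically:

```python
def suggest_machine_tags(photo):
    """Suggest machine tags from metadata a photo already has.

    `photo` is a plain dict (hypothetical shape, not Flickr's); in
    practice the fields would come from EXIF or the hosting service.
    The user can then approve, revise, or decline each suggestion.
    """
    suggestions = []
    if "lat" in photo and "lon" in photo:
        suggestions.append("geo:lat=%s" % photo["lat"])
        suggestions.append("geo:lon=%s" % photo["lon"])
    if "camera" in photo:
        # "exif:" is a made-up namespace for the sake of the example
        suggestions.append("exif:model=%s" % photo["camera"])
    return suggestions

photo = {"lat": "51.594", "lon": "-0.130", "camera": "Canon PowerShot"}
print(suggest_machine_tags(photo))
# ['geo:lat=51.594', 'geo:lon=-0.130', 'exif:model=Canon PowerShot']
```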
One of the big winners of the day was a hack that used machine tags - Flickr Tunes by Steffan Jones. Basically, it was a Mac OS X widget that used the MusicBrainz database (I think) and the Flickr API to match machine tagged photos with a song. So, if a person took a photo that they felt illustrated a particular song, and they used the appropriate machine tags to capture the song name (and even a time code), then the images would display while the song played, as a sort of slide show, even keying to the specific moment in the song, if indicated.
Pretty cool, but as I mentioned, how much data would need to be entered to make it a valuable experience for fans of all different kinds of music?
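I haven’t seen the hack’s code, but the matching step is easy to imagine. Here’s a guess at it in Python, using made-up `song:title` and `song:time` machine tags: collect the photos tagged with a given song and order them by timecode for the slideshow.

```python
def slideshow_order(photos, title):
    """Return photo ids tagged with the given song, ordered by timecode.

    Each photo carries a list of machine tags; "song:title" names the
    song and "song:time" (seconds) keys the photo to a moment in it.
    Both namespaces are hypothetical, for illustration only.
    """
    matched = []
    for photo in photos:
        tags = dict(t.split("=", 1) for t in photo["machine_tags"])
        if tags.get("song:title") == title:
            matched.append((float(tags.get("song:time", 0)), photo["id"]))
    return [pid for _, pid in sorted(matched)]

photos = [
    {"id": "a", "machine_tags": ["song:title=Heroes", "song:time=42"]},
    {"id": "b", "machine_tags": ["song:title=Heroes", "song:time=7"]},
    {"id": "c", "machine_tags": ["song:title=Life on Mars"]},
]
print(slideshow_order(photos, "Heroes"))
# ['b', 'a']
```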
June 21, 2007
So, Hack Day was pretty amazing. The event was organized by Yahoo! and the BBC, with several other sponsors providing interesting thought capital, demonstrations, inspiration and prizes.
The event was meant to start out with 4 sessions of 1-hour presentations by various people at BBC, Flickr, Yahoo, and other places. They demonstrated APIs, available data, and other services available for all the attendees to hack up and play with. I think there’s too much for me to cover in one post, so I’m going to break it up a little and talk about each of the interesting things that I saw, heard, learned about, and hacked.
I missed the first session, because it took me a lot longer to get from Heathrow to Alexandra Palace than I expected. But while I was signing in, my friend and one-time coworker, Chris Sizemore, popped over and said hello. He works for the BBC, so he was partly there in the capacity of helping to make the event run smoothly.
By the way, Alexandra Palace is gorgeous, with an amazing view of London. This high vantage point would turn out to be a bit of a problem later in the day, but in the early morning, it was pretty impressive.
For more about the event, see my posts on:
June 15, 2007
This is kind of old news, but I wanted to include a link to the New York Times article that made me realize that Semantic Technology is on the verge of breaking through. I especially like this bit:
“In its current state, the Web is often described as being in the Lego phase, with all of its different parts capable of connecting to one another. Those who envision the next phase, Web 3.0, see it as an era when machines will start to do seemingly intelligent things.”
June 14, 2007
I’ve been meaning to write about my presentation at the Semantic Technology Conference last month. It was called Representing Taxonomy: What Am I Looking At Here? The idea I was trying to convey is that, as semantic technologies become more widely adopted, we’re (sometimes) going to have to come up with data to drive our semantic applications. We’re also going to need processes for documenting that data and communicating about it to our clients and stakeholders.
June 1, 2007
I added a few things to the Resources page.
A few new tools, like Piggy Bank – an open source tool for gathering data. I haven’t tried it, but I heard a couple of people mention it at the Semantic Technology Conference. Sounds like a good way to gather data for prototypes and proof-of-concept projects.
A couple of Semantic Wikis (Visual Knowledge and Knoodl.com). I’m very interested in this approach because it seems like a manageable entry into the world of building semantic content. People are already kind of familiar with the way to interact with a wiki, and this overlays a powerful layer of functionality. People will be building semantic content and applications without even realizing it. Unfortunately I think that both of these products have a little way to go before they are fully presentable, but it’s great to be able to go in and play around with them. Alice in MetaLand is a budding semantic community built on the Visual Knowledge platform.
I also stumbled across this Semantic Web FAQ, which seems to be a work in progress, but has some potential. Check it out and add some things, if you feel so inclined.