I ran the Houston half-marathon yesterday and here’s an obvious idea that came to me: photographs should be crowd-sourced!
With more than 20,000 runners, I’d estimate there were probably 5,000 to 10,000 cameras shooting photographs from the sidelines: the friends and family of the runners. The Chevron Houston Marathon should host a site where people can upload their photographs, and the site would have the bib-number recognition software needed to tag the photographs and make them searchable by bib number. How hard can that software be?
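Probably not that hard, at least for a rough first pass. Here’s a minimal sketch of what the bib-tagging step might look like, assuming off-the-shelf OCR (Tesseract via the pytesseract library); a real system would want to detect and crop the bib region first, and the digit-length thresholds below are just guesses.

```python
# Minimal sketch: tag a race photo with candidate bib numbers using
# off-the-shelf OCR (Tesseract via pytesseract). A production system would
# first detect and crop the bib region, but this shows the basic idea.
import re

from PIL import Image
import pytesseract

def candidate_bib_numbers(photo_path, min_digits=3, max_digits=5):
    """Return the digit runs found in the photo that look like bib numbers."""
    image = Image.open(photo_path)
    # Whitelist digits and use sparse-text page segmentation, since bibs are
    # short numbers scattered around the frame rather than lines of prose.
    text = pytesseract.image_to_string(
        image, config="--psm 11 -c tessedit_char_whitelist=0123456789"
    )
    return {m for m in re.findall(r"\d+", text) if min_digits <= len(m) <= max_digits}

# Usage: build a searchable index of photo -> bib numbers as photos are uploaded.
# index = {path: candidate_bib_numbers(path) for path in uploaded_photos}
```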
And the Chevron Houston Marathon could do a *ton* of things with the photographs for marketing and promoting the race… Give runners the option of linking their marathon registration with their Facebook accounts, and runners could check a box to allow the Marathon to post photographs of them.
And while we’re talking about it, why can’t the race organizers do a live stream of the entire run and tie in bib-number recognition so people can watch their friends and family run from home, from their iPhones, etc.? It’s probably a harder technical problem and a harder problem from an infrastructure standpoint (i.e., you’d have the codec problem of supporting every viewing platform, and in general it’s harder to build an at-scale video-sharing site than a photo-sharing site), but for similar reasons it’s worth doing.
I assume the organizers make a ton of money selling the official photographs. Why would they want to help you avoid paying them?
First, I doubt the official photographs make very much money for the race organizers. The price to download the 8 photographs that Brightroom took of me: $70. And individual prints are in the $7–15 neighborhood! I’m guessing that the value the race organizers would get out of greater promotion of the race (i.e., they could get Aramco to pay more for sponsorship!) would exceed what they make from sharing revenue with Brightroom.
Funny, but this is basically a transcription problem. (Which Ben has been working on for 6+ years now.) If the images aren’t machine-transcribable, you could do it with real people pretty easily with the Zooniverse platform.
Stay tuned… we are working on turning manuscript-transcription-as-hobby into manuscript-transcription-as-a-full-time-product-business.
Great idea. With the location data embedded in many images, together with their time stamps, it would also be possible to find photos from when a runner was passing a given location. Microsoft Photosynth might be able to build a cool reconstruction of the course out of the photos.
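Here’s a rough sketch of how that time-and-place matching could work, assuming the EXIF timestamps and GPS coordinates have already been extracted from the uploaded photos (the PhotoMeta record, the two-minute window, and the 200 m radius are just illustrative assumptions):

```python
# Rough sketch: given the time a runner crossed a known point on the course
# (e.g., the 10K timing mat), find uploaded photos shot near that spot around
# that moment, using timestamps and GPS coordinates already pulled from EXIF.
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

@dataclass
class PhotoMeta:
    path: str
    taken_at: datetime   # from EXIF DateTimeOriginal
    lat: float           # from EXIF GPSInfo
    lon: float

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def photos_near_split(photos, split_lat, split_lon, split_time,
                      window=timedelta(minutes=2), radius_m=200):
    """Photos taken within `window` of the runner's split and `radius_m` of the spot."""
    return [
        p for p in photos
        if abs(p.taken_at - split_time) <= window
        and haversine_m(p.lat, p.lon, split_lat, split_lon) <= radius_m
    ]
```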
Sara: I hadn’t heard of Zooniverse. Amazon’s Mechanical Turk could also be used for human transcription of bib numbers. Let me know when you launch your manuscript transcription thing; it sounds interesting!
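For what it’s worth, here’s a rough sketch of what posting one bib-transcription task per photo to Mechanical Turk might look like (using boto3 and a Crowd HTML Elements form; the reward, redundancy, and photo URL are just illustrative assumptions, and you’d still need to collect and reconcile the workers’ answers):

```python
# Rough sketch: post one bib-transcription HIT per photo to Amazon Mechanical
# Turk via boto3. The reward, redundancy, and form layout are placeholder
# choices; collecting and reconciling the workers' answers is not shown.
import boto3

QUESTION_XML = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
      <crowd-form>
        <img src="{photo_url}" width="600"/>
        <p>Type every bib number you can read in this photo, separated by commas:</p>
        <crowd-input name="bib_numbers" required></crowd-input>
      </crowd-form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>650</FrameHeight>
</HTMLQuestion>
"""

mturk = boto3.client("mturk", region_name="us-east-1")

def post_bib_hit(photo_url):
    """Ask three workers to transcribe the bib numbers visible in one photo."""
    return mturk.create_hit(
        Title="Transcribe runner bib numbers in a race photo",
        Description="Type the bib numbers visible in the photo.",
        Keywords="image, transcription, marathon",
        Reward="0.03",                    # USD per assignment
        MaxAssignments=3,                 # redundancy so answers can be cross-checked
        LifetimeInSeconds=3 * 24 * 3600,  # keep the HIT up for three days
        AssignmentDurationInSeconds=300,
        Question=QUESTION_XML.format(photo_url=photo_url),
    )
```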
Ed, a friend of mine suggested the Photosynth idea on Facebook; that would be wild, wouldn’t it? I think the blocker on a lot of these ideas is making it easy to upload photographs. It’s still such a pain for the average person to get photographs off their camera and upload them to the web!
all geeking out aside, congrats on the run! that’s a terrific accomplishment!
Thanks Soham! It was fun– and I can walk up stairs again. 🙂