It used to be that snapping a photo and getting an instant print-out was quite the novelty – but what if instead of producing a photo, you could create something else?
Using cognitive services, our Redweb Labs team set out to create a camera that could take an image, interpret it and describe it. After much tinkering and toying, what they produced was pure poetry: a haiku – a three-line verse with five syllables on the first and last lines and seven sandwiched in the middle. Here’s how the Haiku Camera was born.
Our Redweb Labs team poke, prod and play with the latest technology to give our clients a competitive advantage. With cognitive services coming on in leaps and bounds, and increasingly impressive APIs becoming more available – Application Programming Interfaces that help us access tools, technologies and data – it was time to get creative. Because these services are still developing, the first experiments produced some unexpectedly poetic (and admittedly quite funny) responses. These happy accidents sparked an idea – why not turn the descriptions into haikus?
With the idea in place, it was time to get making. Building on the Raspberry Pi Zero, the Redweb Labs team added: a Pi Camera Module; an Adafruit Thermal Printer; a two-port USB battery pack; an 8GB Micro SD card; a little breadboard; and some LEDs, switches and wires to get it fired up and connected. After several prototypes and rounds of testing, the team designed a custom plastic case to house the Pi and all its additions, making sure to allow access for the inevitable updates, tweaks and repairs.
Switching the camera on activates the Pi Camera and lights a green LED – that means it’s ready to get snapping. When you hit the shutter button, the camera takes a picture, and a blue LED illuminates while it waits for responses from several image-recognition APIs from Google, Microsoft and Clarifai. Each of these feeds back details about the photo – from the general setting and objects to the age, gender and emotion of any people. With all that in place, the haiku creation begins.
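Combining the results from several services might look something like this minimal sketch in Python – the response shapes below are simplified stand-ins for illustration, not the real Google, Microsoft or Clarifai payloads:

```python
# Sketch: merge tag results from several image-recognition services into one
# scored list, keeping the highest confidence seen for each tag. The dicts
# below are simplified stand-ins, not real API responses.

def merge_tags(responses):
    """Combine several {tag: confidence} dicts into one."""
    merged = {}
    for response in responses:
        for tag, score in response.items():
            merged[tag] = max(score, merged.get(tag, 0.0))
    return merged

# Hypothetical, simplified results from three services
google_tags = {"beach": 0.92, "sunset": 0.85}
microsoft_tags = {"beach": 0.88, "person": 0.71}
clarifai_tags = {"sunset": 0.95, "sand": 0.60}

tags = merge_tags([google_tags, microsoft_tags, clarifai_tags])
# tags == {"beach": 0.92, "sunset": 0.95, "person": 0.71, "sand": 0.60}
```

Taking the maximum per tag is just one reasonable way to reconcile disagreeing services; averaging or weighting by service would work equally well.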
Redweb Labs opted for a ‘fill-in-the-blanks’ approach, populating the gaps with data from the APIs, which had been turned into an extensive list of tags and derived scores. At this stage there are hundreds of potential lines of haiku, but only the ones with the highest scores and the right syllable counts are combined to create the final verse. This all gets sent back to the camera, ready to be printed.
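The selection step could be sketched like this – the syllable counter here is a deliberately naive vowel-group estimate, and the scoring is invented for illustration; the team’s actual logic isn’t shown in the article:

```python
import re

def count_syllables(line):
    """Very rough syllable estimate: count vowel groups per word.

    This is a naive heuristic (it miscounts silent 'e', for instance),
    used here only to illustrate filtering lines by syllable count.
    """
    return sum(len(re.findall(r"[aeiouy]+", word.lower())) or 1
               for word in line.split())

def pick_line(candidates, syllables):
    """Return the highest-scoring candidate line with the target count.

    `candidates` is a list of (line, score) pairs; returns None if no
    candidate fits the required syllable count.
    """
    fitting = [(score, line) for line, score in candidates
               if count_syllables(line) == syllables]
    return max(fitting)[1] if fitting else None

# Hypothetical candidate lines derived from image tags, with derived scores
first_line = pick_line(
    [("a golden sunset", 0.9), ("long words everywhere", 0.95)],
    syllables=5,
)
# first_line == "a golden sunset" (the other candidate fails the count)
```

Repeating `pick_line` with targets of 5, 7 and 5 would assemble the full verse from whichever candidates both fit and score best.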
Haiku Camera is a great example of technology’s potential to break new ground and create exciting, engaging experiences. Since Haiku Camera, we’ve been experimenting with other ways to utilise the power of cognitive services and pass that knowledge on to our clients. From automatically generating alt text for web images to real-time transcription on video playback, we’re always looking for ways to enhance our services with powerful technologies.
“We’re really pleased with the feedback we’ve had, and off the back of the success of the Haiku Camera, we’re now working on a sister app that will let anyone send in a photo to the Haiku Camera and receive poetry back. Now that cognitive services are becoming more accurate there are plenty of possibilities to use them in client work – watch this space to see what we come up with next!”
Since its creation, the Haiku Camera has enjoyed outings to Bournemouth University’s Graduate and Student Placement fair, as well as Redweb’s own conference, Digital Wave. The camera attracted a lot of attention and was a real talking point, with local students forgoing treats from other stands in favour of seeing ‘the poem camera’ in action. Bournemouth University students were especially interested in the computing behind the camera – we hope it will inspire them to explore cognitive services further and get creative with their own projects. Fear not: if you don’t have the camera in your hand, you can still join in with the action on Twitter – tweet a photo to @haikucamera_bot and it will reply with a haiku.
Road trips to test the Haiku Camera