Akiwi uses a collection of 22 million images tagged with keywords. From these images, akiwi retrieves those that are visually most similar to the uploaded sample and, based on their existing tags, predicts corresponding keywords for the unknown picture.
This approach works very well, provided that most of the retrieved images have similar content. However, if no related images can be found in the database, it may not produce the desired results. Akiwi was designed to cope with this problem: with a little assistance from the user, it can focus on the correct content type.
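The keyword prediction described above can be sketched as simple frequency voting over the tags of the retrieved neighbour images. The following is a minimal illustration, not akiwi's actual implementation; the function name, the example tags, and the choice of plain vote counting are all assumptions made for demonstration.

```python
from collections import Counter

def predict_keywords(neighbor_tags, top_k=3):
    """Suggest keywords by frequency voting over neighbour tags.

    neighbor_tags: one list of tags per visually similar image.
    Returns the top_k most common tags as keyword suggestions.
    (Hypothetical sketch; akiwi's real ranking may differ.)
    """
    votes = Counter(tag for tags in neighbor_tags for tag in tags)
    return [tag for tag, _ in votes.most_common(top_k)]

# Invented tags for five nearest-neighbour images of a beach photo:
neighbors = [
    ["beach", "sea", "sand"],
    ["sea", "sky", "beach"],
    ["beach", "vacation"],
    ["sea", "boat"],
    ["beach", "sea", "summer"],
]
print(predict_keywords(neighbors))  # -> ['beach', 'sea', 'sand']
```

When most neighbours share the same content, the dominant tags win the vote; when the neighbours are mixed, the vote becomes unreliable, which is exactly the case where user assistance helps.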
Working with akiwi is as simple as it can be.
Whenever the keywords fit your expectations, click the finalize button to copy the proposed keywords to your clipboard.
No user interaction is needed if most of the visually similar images show the same content as the uploaded image:
If the visually similar images show different content, the user needs to indicate manually which ones are similar to the query image:
In cases where none of the visually similar images matches, no image should be clicked; instead, a keyword that best describes the uploaded image should be added:
akiwi is a student project from HTW Berlin (University of Applied Sciences). It was developed by Jonas Hartmann, Nico Hezel, Mike Krause and Anja Sonnenberg under the supervision of Prof. Dr. Kai Uwe Barthel. We provided our visual search technology.