PicsLikeThat

Combining keyword search with visual similarity search and automatic image recommendations.

Image navigation, the better way

Conventional keyword image search systems display results as sets of 20 to 50 images spread over separate web pages. Retrieval quality suffers when searching for images with particular attributes, because neither the semantic relationships between the images nor the user's intention is known to the search system. Homonyms and incorrectly assigned keywords pose further problems. In practice, most users do not look beyond the second or third result page.

Thanks to its visual sorting, PicsLikeThat can display several hundred images at once while keeping them easy to inspect. In most cases this is sufficient to get a good overview of the entire result set. The user can quickly spot desired images, which are then used to refine the result by retrieving visually and semantically similar images.
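The sketch below illustrates one way such a visually sorted display could be built: project high-dimensional visual features to 2D and snap each image to a grid cell, so that similar images end up next to each other. The feature vectors, the PCA projection, and the greedy cell assignment are illustrative assumptions, not the actual PicsLikeThat implementation.

```python
# Minimal sketch of a visually sorted image grid (assumed approach, not the
# PicsLikeThat code): images with similar feature vectors land in nearby cells.
import numpy as np
from sklearn.decomposition import PCA

def sort_images_on_grid(features: np.ndarray, grid_w: int, grid_h: int) -> np.ndarray:
    """Return a (grid_h, grid_w) array of image indices; -1 marks empty cells."""
    n = features.shape[0]
    assert n <= grid_w * grid_h, "grid too small for the result set"

    # Project high-dimensional visual features to 2D so that visually
    # similar images receive nearby coordinates.
    coords = PCA(n_components=2).fit_transform(features)

    # Normalise coordinates and map them to target grid positions.
    coords -= coords.min(axis=0)
    coords /= coords.max(axis=0) + 1e-9
    cells = np.floor(coords * [grid_w - 1, grid_h - 1]).astype(int)

    # Greedily place each image in its target cell, or the closest free one.
    grid = -np.ones((grid_h, grid_w), dtype=int)
    free = [(y, x) for y in range(grid_h) for x in range(grid_w)]
    order = np.argsort(cells[:, 1] * grid_w + cells[:, 0])
    for idx in order:
        ty, tx = cells[idx, 1], cells[idx, 0]
        best = min(free, key=lambda c: (c[0] - ty) ** 2 + (c[1] - tx) ** 2)
        grid[best] = idx
        free.remove(best)
    return grid

# Example: place 400 images (hypothetical 128-dim features) on a 25x16 grid.
grid = sort_images_on_grid(np.random.rand(400, 128), grid_w=25, grid_h=16)
```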

PicsLikeThat uses a semantic network that is learned from the visual appearance of the images, their keywords, and tracked user interactions.
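As a rough illustration, the relation between two images could be expressed as a weighted mix of the three signals mentioned above. The concrete signals used here (cosine similarity of visual features, keyword overlap, co-click counts) and their weights are assumptions made for the example; the real semantic network is learned rather than hand-weighted.

```python
# Sketch of a combined image-to-image similarity from three signals:
# visual features, shared keywords, and user co-selection counts.
# Signals, weights, and data structures are illustrative assumptions.
import numpy as np

def combined_similarity(
    vis_a: np.ndarray, vis_b: np.ndarray,   # visual feature vectors
    kw_a: set, kw_b: set,                   # keyword sets assigned to each image
    co_clicks: int, max_co_clicks: int,     # co-selection counts from usage logs
    w_vis: float = 0.5, w_kw: float = 0.3, w_user: float = 0.2,
) -> float:
    # Visual signal: cosine similarity of the feature vectors.
    visual = float(np.dot(vis_a, vis_b) /
                   (np.linalg.norm(vis_a) * np.linalg.norm(vis_b) + 1e-9))
    # Keyword signal: Jaccard overlap of the assigned keywords.
    keyword = len(kw_a & kw_b) / max(len(kw_a | kw_b), 1)
    # Interaction signal: normalised co-click frequency.
    user = co_clicks / max(max_co_clicks, 1)
    return w_vis * visual + w_kw * keyword + w_user * user

# Example: two hypothetical "palm" images.
sim = combined_similarity(
    np.random.rand(128), np.random.rand(128),
    {"palm", "beach", "sand"}, {"palm", "tree"},
    co_clicks=12, max_co_clicks=40,
)
```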

Workflow

1. Assume you want to find pictures of a palm tree on a beach. A search request for "palm" returns several distinct result categories.
2. Double-clicking any image close to what you have in mind serves you more "pics like that".
3. Within a few clicks, the request is narrowed down to the desired category, as sketched in the example below.
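A minimal sketch of this refinement loop, assuming per-image keyword sets and visual feature vectors. The helper names and the mean-similarity re-ranking are hypothetical, not the PicsLikeThat implementation.

```python
import numpy as np

def _unit(m):
    """Normalise rows to unit length."""
    return m / (np.linalg.norm(m, axis=-1, keepdims=True) + 1e-9)

def refine(candidates, selected, features, top_k=100):
    """Re-rank candidate images by mean cosine similarity to the selected ones."""
    sel = _unit(features[selected])             # (s, d) selected image features
    cand = _unit(features[candidates])          # (c, d) candidate image features
    scores = cand @ sel.T                       # (c, s) pairwise cosine similarities
    ranking = np.argsort(-scores.mean(axis=1))  # highest average similarity first
    return [candidates[i] for i in ranking[:top_k]]

# Step 1: keyword search for "palm" over hypothetical per-image keyword sets.
keywords = [{"palm", "beach"}, {"palm", "hand"}, {"palm", "tree", "beach"}, {"city"}]
features = np.random.rand(4, 128)               # hypothetical visual feature vectors
candidates = [i for i, kws in enumerate(keywords) if "palm" in kws]

# Step 2: the user double-clicks image 0 (a palm on a beach); step 3: refine.
print(refine(candidates, selected=[0], features=features, top_k=3))
```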