Harness the power of artificial intelligence and machine learning to auto-tag visual media.
- Seamlessly integrate artificial intelligence and auto-tagging into Picturepark.
- Save time by allowing deep learning algorithms to do the hard work.
- Make searching for visual content simple, straightforward and intuitive.
- Customise metadata categories to suit your organisation's needs.
The Connector for Clarifai is fully compatible with both the cloud and on-premise versions of Picturepark.
How it Works
Clarifai uses deep learning artificial intelligence technologies to visually interpret images. This connector gives you the power to harness Clarifai’s advanced AI technology within your own Picturepark environment.
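As a simplified illustration of that flow (the function, data shapes and threshold below are hypothetical, not the connector's actual API), auto-tagging amounts to sending an image to an image-recognition service, receiving concept predictions with confidence scores, and keeping the confident ones as metadata tags:

```python
# Hypothetical sketch of the auto-tagging step: concept predictions come
# back from the recognition service as (concept, confidence) pairs, and
# only those above a confidence threshold are stored as tags.
# The threshold value is an illustrative assumption.

def predictions_to_tags(predictions, threshold=0.85):
    """Keep only concepts whose confidence meets the threshold."""
    return [concept for concept, confidence in predictions
            if confidence >= threshold]

# Example predictions for a single image:
predictions = [("dog", 0.98), ("grass", 0.91), ("frisbee", 0.62)]
print(predictions_to_tags(predictions))  # ['dog', 'grass']
```

In practice the confident tags would then be written back to the asset's metadata in Picturepark, where they become searchable like any manually entered keyword.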
Save Time and Maximise Efficiency
Manually inserting tags is tedious and can consume a large proportion of your team's time. With AI-powered auto-tagging, you free up working hours and drastically reduce repetitive workflows, leaving more time to spend curating and organising content.
Personalise Your Content
We understand that needs between organisations vary. With Picturepark, you have complete control over how your own metadata is auto-tagged. You have the option to use pre-existing taxonomies or insert entirely new ones.
As a super user, you can choose to ignore keywords individually or by entire keyword groupings. You moderate how your content is tagged, deciding which keywords to ignore and which to focus on.
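A minimal sketch of those moderation rules (the names and rule sets here are illustrative assumptions, not the actual Picturepark configuration): each incoming keyword is checked against an individual ignore list and a grouping-level ignore list before it is stored.

```python
# Illustrative moderation rules: keywords can be ignored one by one,
# or an entire keyword grouping can be ignored at once.
IGNORED_KEYWORDS = {"no person"}   # ignore a single keyword
IGNORED_GROUPS = {"colours"}       # ignore a whole keyword grouping

def moderate_tags(tags):
    """tags: list of (keyword, group) pairs; returns accepted keywords."""
    return [keyword for keyword, group in tags
            if keyword not in IGNORED_KEYWORDS
            and group not in IGNORED_GROUPS]

tags = [("dog", "animals"), ("beige", "colours"), ("no person", "general")]
print(moderate_tags(tags))  # ['dog']
```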
Recommend & Guide
Intelligently generated ‘bucket’ keywords recognise areas in your visual media that have not been tagged before. Bucket keywords improve your taxonomy and increase the accuracy of your tagged content, making it easier and faster to find that perfect piece of content.
Make Searching a Breeze
When combined with semantic search, the Clarifai connector makes looking for content even more straightforward. We recognise that different users in an organisation have different search styles. One user might search generically using only the keyword ‘dog’ and miss an image tagged specifically as ‘Yorkshire Terrier’. Child and parent tag groups put an end to low-quality visual search results; in this example, the content would instead be tagged as follows:
- Child tag: Yorkshire Terrier
- Parent tag: Dog
- Grandparent tag: Mammal
- Great-grandparent tag: Animal
This means search is faster than ever, more accurate and suited to the varied search habits of a broader user group, letting people search the way they want rather than the way the system enforces.
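The hierarchy above can be sketched as a simple parent lookup (the taxonomy and helper below are hypothetical, not the connector's internals): tagging an image with its most specific concept implicitly tags it with every ancestor, so a generic search for ‘dog’ still finds a ‘Yorkshire Terrier’ image.

```python
# Illustrative child -> parent taxonomy from the example above.
PARENT = {
    "Yorkshire Terrier": "Dog",
    "Dog": "Mammal",
    "Mammal": "Animal",
}

def expand_tags(tag):
    """Return the tag plus all of its ancestors, most specific first."""
    tags = [tag]
    while tag in PARENT:
        tag = PARENT[tag]
        tags.append(tag)
    return tags

print(expand_tags("Yorkshire Terrier"))
# ['Yorkshire Terrier', 'Dog', 'Mammal', 'Animal']
```

Because the expanded set is attached to the asset, a query for any level of the hierarchy matches, regardless of how specifically the image was originally tagged.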