The recently launched OIDA Image Collection highlights images found within OIDA and includes a description of each image. Writing those descriptions by hand would have taken an enormous amount of time, so the OIDA team used artificial intelligence (AI) to generate captions that make the images more discoverable in the collection.
But we need your help! We generated captions with two different AI models and need to decide, for each image, which caption is better suited to the OIDA Image Collection. Thanks to support from Hugging Face, a platform for collaborating on machine learning models and datasets, and its Argilla data annotation tool, we have created a handy interface for voting on the quality of image captions. To help us out, you’ll just need to create a free Hugging Face account.
Your image-labeling efforts will contribute to an open preference dataset: a record of which of two model outputs human reviewers judge to be better, which is crucial for "steering" AI models toward generating more useful outputs in specific domains.
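For the technically curious, here is a minimal sketch in Python of how a pairwise vote like the one you'll cast might be turned into a "chosen"/"rejected" training example. The field names, captions, and vote values are purely illustrative, not OIDA's actual schema.

```python
from collections import Counter

# Hypothetical record: one image, two model-generated captions,
# and one preference label per annotator.
record = {
    "image_id": "example-0001",  # placeholder identifier
    "caption_a": "A typed memo on company letterhead.",
    "caption_b": "A scanned page with handwritten annotations.",
    "votes": ["caption_a", "caption_a", "caption_b"],
}

# Majority vote decides which caption is "chosen" and which is "rejected".
# (image, chosen, rejected) triples are the typical shape of data used to
# steer models via preference-tuning methods.
tally = Counter(record["votes"])
if tally["caption_a"] >= tally["caption_b"]:
    chosen, rejected = "caption_a", "caption_b"
else:
    chosen, rejected = "caption_b", "caption_a"

preference_example = {
    "image_id": record["image_id"],
    "chosen": record[chosen],
    "rejected": record[rejected],
}
print(preference_example)
```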
“Projects like this ensure AI becomes useful for a wider range of audiences, aligning with Hugging Face’s mission to democratize machine learning and make AI more accessible and impactful across diverse fields,” said Daniel van Strien, machine learning librarian at Hugging Face and OIDA National Advisory Committee member.
Vision language models (VLMs) are a cutting-edge area of AI, and your contributions will enable the development of more specialized models for important applications such as captioning large archival image collections. By participating, you're not just helping OIDA – you're shaping the future of AI to better serve specialized communities and make visual information more accessible across a wider range of document types.