Description

Aligning vision and language is an important step in many applications: whether it is enabling the visually impaired to navigate the world through natural language or providing a familiar interface to otherwise opaque computational systems, the field is ripe with promise. Some of the largest roadblocks to realizing integrated vision and language systems, such as image captioning, are prediction artifacts of the training process and data. This report discusses two weaknesses of captioning systems: the exaggeration of dataset bias related to gender presentation and the “hallucination” of objects that are not visually present in the scene.

The first chapter focuses on correcting the salient issue of gender bias in image captioning models. By introducing loss terms that encourage equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present, we can ensure that the predictions are not only less error-prone but also more grounded in the image input.
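
To make the idea concrete, below is a minimal PyTorch-style sketch of what such loss terms could look like, assuming the model exposes probabilities for female and male gendered words at gendered timesteps; the function names, weighting scheme, and exact functional forms are illustrative assumptions, not the report's precise formulation.

```python
import torch

def appearance_confusion_loss(p_woman, p_man):
    """Encourage equal gender probability when gender evidence is occluded.

    p_woman, p_man: tensors of shape (batch,) holding the probability mass
    the model places on female vs. male gendered words, computed on images
    with the person region masked out.
    """
    # The closer the two probabilities, the lower the loss.
    return torch.abs(p_woman - p_man).mean()

def confident_loss(p_correct, p_incorrect, eps=1e-8):
    """Encourage confident gender predictions when gender evidence is visible.

    p_correct: probability of the ground-truth gendered word on the unmasked
    image; p_incorrect: probability of the opposite-gender word.
    """
    # Loss shrinks as the incorrect-gender probability becomes small
    # relative to the correct-gender probability.
    return (p_incorrect / (p_correct + p_incorrect + eps)).mean()

# Hypothetical combined objective: the usual caption cross-entropy plus the
# two gender-grounding terms, weighted by tunable coefficients.
def total_loss(caption_ce, p_w_masked, p_m_masked, p_corr, p_inc,
               alpha=1.0, beta=1.0):
    return (caption_ce
            + alpha * appearance_confusion_loss(p_w_masked, p_m_masked)
            + beta * confident_loss(p_corr, p_inc))
```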

In the second chapter, we broaden the lens of our analysis by developing a new image relevance metric to investigate “hallucinations”. With this tool, we will analyze how captioning model architectures and learning objectives contribute to object hallucination, explore when hallucination is likely due to image misclassification or language priors, and assess how well current sentence metrics capture object hallucination.
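
As a rough illustration of the kind of measurement involved (the report's actual metric definition may differ), a hallucination score for a single caption could be computed by comparing the object categories mentioned in the caption against the objects annotated in the image:

```python
def hallucination_rate(caption_objects, image_objects):
    """Fraction of objects mentioned in a caption that are absent from the image.

    caption_objects: set of object categories mentioned in the generated
    caption (e.g. extracted by matching words against an object vocabulary).
    image_objects: set of object categories actually present in the image,
    taken from ground-truth annotations.
    """
    if not caption_objects:
        return 0.0
    hallucinated = caption_objects - image_objects
    return len(hallucinated) / len(caption_objects)

# Example: the caption mentions a dog that is not annotated in the image.
caption_objects = {"person", "bench", "dog"}
image_objects = {"person", "bench"}
print(hallucination_rate(caption_objects, image_objects))  # ~0.33
```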
