Description
The first chapter focuses on correcting the salient issue of gender bias in image captioning models. By introducing loss terms that encourage equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present, we push the model toward predictions that are not only less error-prone but also better grounded in the image input.
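As a rough illustration, the two loss terms might be realized as in the minimal PyTorch sketch below. The function names, the `gender_ids` parameter, and the entropy-based confidence term are illustrative assumptions, not the chapter's actual formulation; the sketch assumes the captioning model exposes next-word logits, that the gendered word pair (e.g. "man"/"woman") has known vocabulary indices, and that a copy of the image with gender evidence occluded is available.

```python
import torch
import torch.nn.functional as F


def appearance_confusion_loss(logits_masked, gender_ids):
    """Penalize any gap between the two gender-word probabilities
    when gender evidence has been occluded from the image.

    logits_masked: (batch, vocab) next-word logits from the masked image.
    gender_ids: list of the two vocabulary indices for the gendered pair.
    """
    probs = F.softmax(logits_masked, dim=-1)
    p_pair = probs[:, gender_ids]  # (batch, 2)
    return (p_pair[:, 0] - p_pair[:, 1]).abs().mean()


def confidence_loss(logits_full, gender_ids):
    """Encourage a confident (low-entropy) choice between the two
    gendered words when gender evidence is visible; correctness of
    the choice is left to the standard captioning loss.

    logits_full: (batch, vocab) next-word logits from the full image.
    """
    pair_logits = logits_full[:, gender_ids]  # (batch, 2)
    pair_probs = F.softmax(pair_logits, dim=-1)
    entropy = -(pair_probs * pair_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy.mean()
```

In training, both terms would be weighted and added to the usual cross-entropy captioning objective, so the model is rewarded for being uncertain only when the image itself is uninformative about gender.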
In the second chapter, we broaden the lens of our analysis by developing a new image relevance metric to investigate “hallucinations”. With this tool, we will analyze how captioning model architectures and learning objectives contribute to object hallucination, explore when hallucination is likely due to image misclassification or language priors, and assess how well current sentence metrics capture object hallucination.
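For concreteness, one simple per-caption instantiation of such an image relevance metric scores the fraction of mentioned objects that are absent from the image's ground-truth annotations. This sketch is an assumption for illustration, not the thesis's actual metric: the name `hallucination_rate` is hypothetical, and it presumes object mentions have already been extracted from the caption and mapped to the annotation vocabulary.

```python
def hallucination_rate(caption_objects, image_objects):
    """Fraction of objects mentioned in a caption that do not
    appear in the image's ground-truth object annotations."""
    mentioned = set(caption_objects)
    if not mentioned:
        return 0.0
    return len(mentioned - set(image_objects)) / len(mentioned)


# Example: the caption mentions a "dog" that is not annotated in the image.
rate = hallucination_rate({"dog", "frisbee", "grass"},
                          {"frisbee", "grass", "person"})
print(rate)  # 0.333... : one of three mentioned objects is hallucinated
```

Averaging such per-caption scores over a dataset gives a corpus-level hallucination rate that can be compared across model architectures, learning objectives, and sentence metrics.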