Bachelor of Science (B.S.) in Computer Science and University Honors
Natural language processing (Computer science), Machine learning -- Evaluation, Image processing -- Data processing
There have been numerous efforts to accomplish the task of visual grounding (Deng et al., 2018; Johnson et al., 2015; Krishna et al., 2018), the act of matching regions or objects within an image to natural language queries. Yet with each method released, uncertainty grows about the effectiveness of the machine's learning: are computers learning what we expect, and are datasets properly testing this learning? (Cirik et al., 2018). In this thesis, I analyze the visual grounding method of "Referring Relationships" (RR) by Krishna et al. (2018). I find that RR's relationship information does not have a significant positive impact on performance compared to a baseline model that only detects objects. In addition, I find that the Visual Relationship Detection (VRD) dataset, one of the datasets used in the original paper, exhibits bias: it allows methods that do not utilize relationships to perform well, showing that the VRD dataset cannot properly test the RR method.
In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/ This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
Hahn, Kennedy, "Analyzing the Visual Grounding of 'Referring Relationships'" (2019). University Honors Theses. Paper 779.