Read Anywhere Pointed:
Layout-aware GUI Screen Reading with Tree-of-Lens Grounding

Yue Fan†,1, Lei Ding1, Ching-Chen Kuo2, Shan Jiang2, Yang Zhao2, Xinze Guan2, Jie Yang3, Yi Zhang2, Xin Eric Wang†,1,
1UC Santa Cruz, 2eBay, 3Cybever

†Corresponding to: yfan71@ucsc.edu, xwang366@ucsc.edu

[GIF: ToL agent demo]

Our ToL agent describes the region on the screenshot indicated by a point from the user.

The generated descriptions include important layout information, which is critical: without it, one cannot distinguish the two identical "Tumbler pack" items shown.

Abstract

Graphical User Interfaces (GUIs) are central to our interaction with digital devices. Recently, growing efforts have been made to build models for various GUI understanding tasks. However, these efforts largely overlook an important GUI-referring task: screen reading based on user-indicated points, which we name the Screen Point-and-Read (ScreenPR) task. This task is predominantly handled by rigid accessible screen reading tools and is in great need of new models driven by advancements in Multimodal Large Language Models (MLLMs). In this paper, we propose a Tree-of-Lens (ToL) agent, utilizing a novel ToL grounding mechanism, to address the ScreenPR task. Given an input point coordinate and the corresponding GUI screenshot, our ToL agent constructs a Hierarchical Layout Tree. Based on this tree, our ToL agent not only comprehends the content of the indicated area but also articulates the layout and the spatial relationships between elements. Such layout information is crucial for accurately interpreting information on the screen, distinguishing our ToL agent from other screen reading tools. We also thoroughly evaluate the ToL agent against other baselines on a newly proposed ScreenPR benchmark, which includes GUIs from mobile, web, and operating systems. Last but not least, we test the ToL agent on mobile GUI navigation tasks, demonstrating its utility in identifying incorrect actions along the path of agent execution trajectories.

Tree-of-Lens (ToL) Agent

[Figure: Pipeline of the Tree-of-Lens agent]

Pipeline of the Tree-of-Lens agent. The Hierarchical Layout Tree is first constructed based on detected global and local regions from the input screenshot. Then, a set of hierarchical lenses with various field widths is generated from the selected target path in the tree and sent as visual prompts to GPT-4o to generate the content and layout descriptions.
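For intuition, here is a minimal Python sketch of this pipeline, assuming screenshots are PIL images; the Region class, the path-selection rule, and the function names are illustrative assumptions, not the released implementation.

    from dataclasses import dataclass, field
    from PIL import Image

    @dataclass
    class Region:
        """A node in the Hierarchical Layout Tree: a bounding box plus child regions."""
        box: tuple                                   # (x0, y0, x1, y1) in pixels
        children: list = field(default_factory=list)

    def contains(box, point):
        x0, y0, x1, y1 = box
        px, py = point
        return x0 <= px <= x1 and y0 <= py <= y1

    def area(box):
        x0, y0, x1, y1 = box
        return (x1 - x0) * (y1 - y0)

    def target_path(root, point):
        """Walk from the root to the deepest region covering the input point."""
        path, node = [root], root
        while True:
            hits = [c for c in node.children if contains(c.box, point)]
            if not hits:
                return path
            node = min(hits, key=lambda c: area(c.box))  # tightest covering child
            path.append(node)

    def lenses(screenshot: Image.Image, path):
        """Crop one 'lens' per region on the target path, widest field first."""
        return [screenshot.crop(region.box) for region in path]

The resulting lens crops, together with the marked point, serve as visual prompts for GPT-4o to generate the content and layout descriptions.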

Screen Point-and-Read (ScreenPR) Benchmark

[Figure: ScreenPR benchmark statistics]

The Screen Point-and-Read (ScreenPR) benchmark is introduced to rigorously evaluate our ToL agent on the ScreenPR task, where descriptions are required based on user-indicated points on screenshots. It covers a diverse range of GUI domains. Each screenshot is annotated with roughly two target points and their corresponding local regions.
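For concreteness, a single benchmark sample could be represented as below; the field names and values are illustrative assumptions, not the released annotation schema.

    # A hypothetical ScreenPR sample; the schema shown is illustrative only.
    sample = {
        "screenshot": "web/example_0042.png",  # GUI domain: mobile, web, or operating system
        "point": (512, 318),                   # user-indicated (x, y) in pixels
        "local_region": (448, 280, 640, 360),  # annotated box (x0, y0, x1, y1) around the point
    }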

Main Result

[Figures: cycle consistency evaluation and main results]

Main results. We evaluate our ToL agent against three baselines using human evaluation and cycle consistency evaluation (shown on the left) on our ScreenPR benchmark. Additionally, we compare the generated content descriptions with human-verified content descriptions using language similarity scores. The results show that the ToL agent achieves the best performance.
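Cycle consistency here means that a generated description should let a listener model point back to the original region. Below is a minimal sketch of one such check, assuming hypothetical describe and localize callables (e.g., the ToL agent and a model asked to point from text); this is not the paper's exact protocol.

    def cycle_consistent(screenshot, point, local_region, describe, localize):
        """One cycle-consistency check: describe the pointed region, then test
        whether a listener model can point back into the annotated local region
        from the description alone."""
        description = describe(screenshot, point)   # e.g., the ToL agent's output
        px, py = localize(screenshot, description)  # e.g., GPT-4o asked to point
        x0, y0, x1, y1 = local_region
        return x0 <= px <= x1 and y0 <= py <= y1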

Demo of ToL Agent

You will play as the user and provide input by clicking the image. The output from our ToL agent will be shown in real time. Click the start button below to launch the demo. (If clicking yields no response, please refresh and retry.)

  • Click anywhere on the screen, and find the description output below. You will also see a red box showing the predicted local region and a blue box showing the predicted global region.
  • Click coordinates: will appear here after clicking.
  • Output description: will appear here after clicking.

BibTeX


@misc{fan2024readpointedlayoutawaregui,
  title={Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding},
  author={Yue Fan and Lei Ding and Ching-Chen Kuo and Shan Jiang and Yang Zhao and Xinze Guan and Jie Yang and Yi Zhang and Xin Eric Wang},
  year={2024},
  eprint={2406.19263},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2406.19263},
}