---
license: apache-2.0
task_categories:
- visual-question-answering
- text-generation
language:
- en
- zh
tags:
- android
- gui grounding
- gui agent
- english app
- chinese app
- long-horizon-planning
pretty_name: AndroidLens
size_categories:
- 1K<n<10K
---

# AndroidLens: Long-latency Evaluation with Nested Sub-targets for Android GUI Agents

> 📄 **Paper**: [AndroidLens: Long-latency Evaluation with Nested Sub-targets for Android GUI Agents](http://arxiv.org/abs/2512.21302)
> 💾 **GitHub**: [https://github.com/alibaba/AndroidLens](https://github.com/alibaba/AndroidLens)
> 🤗 **Hugging Face**: [yuecao0119/AndroidLens](https://huggingface.co/datasets/yuecao0119/AndroidLens)

---

## 🗂️ Dataset Structure

The dataset is organized as follows:

```
test/
├─ en/                        # English tasks
│  └─ <episode_id>/
│     ├─ <episode_id>.json    # Full episode trajectory (list of steps)
│     ├─ <episode_id>_0.png   # Screenshot at step 0
│     ├─ <episode_id>_1.png
│     └─ ...
└─ zh/                        # Chinese tasks
   └─ <episode_id>/
      ├─ <episode_id>.json
      ├─ <episode_id>_0.png
      └─ ...
```

Each `<episode_id>.json` file contains a **list of step objects**, with one object per interaction step.
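To make the layout concrete, here is a minimal loading sketch. It assumes the structure shown above and that `image_path` (see the step-level format below) is relative to the `test/` root; `DATA_ROOT` and `load_episode` are illustrative names, not part of the dataset.

```python
import json
from pathlib import Path

from PIL import Image  # pip install pillow

DATA_ROOT = Path("test")  # wherever the dataset archive was extracted

def load_episode(episode_dir: Path) -> list[dict]:
    """Read the list of step objects from the episode's <episode_id>.json."""
    with open(episode_dir / f"{episode_dir.name}.json", encoding="utf-8") as f:
        return json.load(f)

# Walk every English episode and pair each step with its screenshot.
for episode_dir in sorted(p for p in (DATA_ROOT / "en").iterdir() if p.is_dir()):
    steps = load_episode(episode_dir)
    print(f"{episode_dir.name}: {steps[0]['instruction']} ({len(steps)} steps)")
    for step in steps:
        screenshot = Image.open(DATA_ROOT / step["image_path"])
        # ... feed (instruction, screenshot, action history) to an agent here
```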
---

### 🏷️ Task Category Codes (`types`)

The `types` field uses a hierarchical two-digit code system (e.g., `1-2`) to classify task complexity and structure. The categories follow AndroidLens's taxonomy of **Multi-goal (1-X)**, **Multi-constraint (2-X)**, and **Domain-specific (3-X)** tasks, enabling fine-grained analysis of agent performance across different challenge dimensions.

| Code | Category | Description |
|------|----------|-------------|
| **1-1** | Single-app Unrelated Tasks | Multiple independent subtasks within **one app**, with no logical dependency |
| **1-2** | Single-app Related Tasks | Multiple **dependent** subtasks within **one app** (e.g., "Search for a product → add it to the cart → check out") |
| **1-3** | Cross-app Unrelated Tasks | Independent actions across **multiple apps** (e.g., "Send a message on WeChat, then play music on QQ Music") |
| **1-4** | Cross-app Related Tasks | **Interdependent** operations across **multiple apps** (e.g., copy a link from Chrome → paste it in Drive → upload) |
| **2-1** | Operation-Level Constraints | Tasks requiring precise **widget-level control**, such as exact text input, time/date pickers, or multi-condition filtering |
| **2-2** | Page-Level Constraints | Tasks with navigation constraints such as a specific tab/category selection, sort/filter order, or view state |
| **3-1** | Batch Operation Tasks | Repeated actions on multiple items (e.g., "Empty the shopping carts on Taobao, JD, and Pinduoduo") |
| **3-2** | Combine with VLM Capabilities | Tasks that **leverage the agent's built-in multimodal abilities**, such as translation, comparison, summarization, or OCR-to-action |

---

## 📑 Step-level Data Format

Each step in the JSON list includes:

| Field | Type | Description |
|-------|------|-------------|
| `episode_id` | `str` | Unique task ID (UUID) |
| `language` | `str` | `"en"` or `"zh"` |
| `app` | `List[str]` | Sequence of involved apps (e.g., `["Google Chrome", "Google Drive"]`) |
| `episode_length` | `int` | Total number of steps in the full trajectory |
| `step_id` | `int` | Current step index (0-based) |
| `instruction` | `str` | High-level user goal (identical for all steps in the episode) |
| `image_path` | `str` | Relative path to the screenshot (e.g., `en/<episode_id>/<episode_id>_0.png`) |
| `image_width`, `image_height` | `int` | Original resolution of the screenshot |
| `result_action_type` | `List[int]` | Action code (`[4]` = Click; see the mapping below) |
| `result_touch_yx` | `List[str]` | **Normalized** touch coordinates as a string `"[y, x]"` in the range `[0, 1]` |
| `result_lift_yx` | `List[str]` | End point of a swipe (same as the touch point for a tap) |
| `result_action_text` | `List[str]` | Text to input (empty if none) |
| `duration` | `List[null/float]` | Action hold time (`null` for a tap) |
| `low_instruction` | `str` | Step-specific guidance (for low-level evaluation) |
| `milestone` | `dict` | **Nested sub-goal** info (see below) |
| `types` | `List[str]` | Task category code (e.g., `"1-2"`); see the full mapping above |

> 🔍 **Coordinate Note**:
> - `result_touch_yx` uses **relative coordinates** in `[0, 1]`, formatted as the string `"[y, x]"` (note: *y first*).
> - To convert to absolute pixels:
> ```python
> # y_rel, x_rel come from parsing the "[y, x]" string, e.g. via json.loads
> y_abs = float(y_rel) * image_height
> x_abs = float(x_rel) * image_width
> ```

---

## 🎯 Milestone Format

The `milestone` field enables **fine-grained progress evaluation**:

```json
{
  "sub-target": "Open Google Chrome and search for 'panda'",
  "idx": 1,
  "bbox": [0.023, 0.121, 0.976, 0.189],
  "text": "panda",
  "state": ["selected"]
}
```

- `idx`: milestone index (ordered)
- `bbox`: bounding box of the key UI element, as `[x1, y1, x2, y2]` in normalized coordinates
- `text` / `state`: expected content or widget state

Milestones support both **ordered** and **unordered** sub-goals for complex tasks.

---

## 📊 Action Type Mapping

Although the original [AgentCPM-GUI](https://github.com/OpenBMB/AgentCPM-GUI) defines actions by name, this dataset uses numeric codes in `result_action_type`. Following AndroidLens annotation practice, the mapping is:

| Code | Action | Required Fields |
|------|-----------|-------------------------------------|
| 1 | Wait | `duration` |
| 3 | Type | `result_action_text` |
| 4 | Click | `result_touch_yx` |
| 4 | LongPress | `result_touch_yx`, `duration` |
| 4 | Swipe | `result_touch_yx`, `result_lift_yx` |
| 5 | PressBack | — |
| 6 | PressHome | — |
| 10 | Terminate | — |

> If in doubt, confirm the exact mapping against the annotation code. AndroidLens executes actions via ADB, with explicit start and end points for swipes.

---

## 📜 License

AndroidLens is released under the **Apache-2.0 License**. Screenshots are derived from real app usage and are provided for research purposes only; please comply with the relevant app store policies and local regulations.

---

## ✏️ Citation

If this work is helpful for your research, please consider citing the following BibTeX entry.

```bibtex
@article{cao2025androidlens,
  title={AndroidLens: Long-latency Evaluation with Nested Sub-targets for Android GUI Agents},
  author={Yue Cao and Yingyao Wang and Pi Bu and Jingxuan Xing and Wei Jiang and Zekun Zhu and Junpeng Ma and Sashuai Zhou and Tong Lu and Jun Song and Yu Cheng and Yuning Jiang and Bo Zheng},
  year={2025},
  journal={arXiv preprint arXiv:2512.21302},
}
```
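---

## 🧪 Example: Working with Coordinates and Milestones

To make the coordinate and milestone conventions above concrete, here is a small sketch. It is **not** the official AndroidLens evaluator (milestone matching in the benchmark also involves `text`/`state` checks and sub-goal ordering); the helper names and sample values are illustrative only.

```python
import json

def touch_to_pixels(touch_yx: str, width: int, height: int) -> tuple[float, float]:
    """Convert a '[y, x]' relative-coordinate string to absolute (x, y) pixels."""
    y_rel, x_rel = json.loads(touch_yx)  # note: y comes first in the dataset
    return x_rel * width, y_rel * height

def touch_hits_bbox(touch_yx: str, bbox: list[float]) -> bool:
    """Check whether a relative touch point lies inside a milestone bbox
    given as [x1, y1, x2, y2] in normalized coordinates."""
    y_rel, x_rel = json.loads(touch_yx)
    x1, y1, x2, y2 = bbox
    return x1 <= x_rel <= x2 and y1 <= y_rel <= y2

# Illustrative step, reusing the milestone bbox shown earlier.
step = {
    "result_touch_yx": ["[0.155, 0.500]"],  # hypothetical tap, "[y, x]" format
    "image_width": 1080,                    # hypothetical resolution
    "image_height": 2400,
}
milestone_bbox = [0.023, 0.121, 0.976, 0.189]

x_abs, y_abs = touch_to_pixels(step["result_touch_yx"][0],
                               step["image_width"], step["image_height"])
print(f"Tap at ({x_abs:.0f}, {y_abs:.0f}) px; "
      f"inside milestone bbox: {touch_hits_bbox(step['result_touch_yx'][0], milestone_bbox)}")
```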