Beyond Clicking: A Step Towards Generalist GUI Grounding via Text Dragging

Project Page | GitHub Repository | Paper

GUI-Drag-7B is a multimodal model designed for GUI grounding, with a specific focus on text dragging interactions. Whereas traditional grounding models focus on clicking, GUI-Drag enables autonomous agents to select and manipulate textual content through dragging actions. The model is trained from the Jedi models via an efficient continual training strategy that enhances text dragging performance while preserving the original click-based capabilities.

Quick Demo

Below is the code for a quick demo (demo.png can be found here). To use the model, first start a vLLM server:

vllm serve osunlp/GUI-Drag-7B \
--host 0.0.0.0 \
--port 8000 \
--max-model-len 16384 \
--tensor-parallel-size 2
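
Adjust --tensor-parallel-size to match the number of available GPUs (or omit it on a single GPU). Then run the client script below:
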
# pip install openai pillow transformers
import base64
import json
import re
from pathlib import Path
import io
from openai import OpenAI
from PIL import Image, ImageDraw
from transformers.models.qwen2_vl.image_processing_qwen2_vl_fast import smart_resize as qwen_smart_resize

MODEL_ID = "osunlp/GUI-Drag-7B"
BASE_URL = "http://localhost:8000/v1"

FN_CALL_TEMPLATE = """You are a helpful assistant.
# Tools
You may call one or more functions to assist with the user query.
You are provided with function signatures within <tools></tools> XML tags:
<tools>
{{"type": "function", "function": {{"name": "computer_use", "description": "Use a mouse and keyboard to interact with a computer, and take screenshots.
* This is an interface to a desktop GUI. You do not have access to a terminal or applications menu. You must click on desktop icons to start applications.
* Some applications may take time to start or process actions, so you may need to wait and take successive screenshots to see the results of your actions. E.g. if you click on Firefox and a window doesn't open, try wait and taking another screenshot.
* The screen's resolution is {width}x{height}.
* Whenever you intend to move the cursor to click on an element like an icon, you should consult a screenshot to determine the coordinates of the element before moving the cursor.
* If you tried clicking on a program or link but it failed to load, even after waiting, try adjusting your cursor position so that the tip of the cursor visually falls on the element that you want to click.
* Make sure to click any buttons, links, icons, etc with the cursor tip in the center of the element. Don't click boxes on their edges unless asked.", "parameters": {{"properties": {{"action": {{"description": "The action to perform. The available actions are:
* `key`: Performs key down presses on the arguments passed in order, then performs key releases in reverse order.
* `type`: Type a string of text on the keyboard.
* `mouse_move`: Move the cursor to a specified (x, y) pixel coordinate on the screen.
* `left_click`: Click the left mouse button.
* `left_click_drag`: Click and drag the cursor to a specified (x, y) pixel coordinate on the screen.
* `right_click`: Click the right mouse button.
* `middle_click`: Click the middle mouse button.
* `double_click`: Double-click the left mouse button.
* `scroll`: Performs a scroll of the mouse scroll wheel.
* `wait`: Wait specified seconds for the change to happen.
* `terminate`: Terminate the current task and report its completion status.", "enum": ["key", "type", "mouse_move", "left_click", "left_click_drag", "right_click", "middle_click", "double_click", "scroll", "wait", "terminate"], "type": "string"}}, "keys": {{"description": "Required only by `action=key`.", "type": "array"}}, "text": {{"description": "Required only by `action=type`.", "type": "string"}}, "coordinate": {{"description": "(x, y): The x (pixels from the left edge) and y (pixels from the top edge) coordinates to move the mouse to. Required only by `action=mouse_move`, `action=left_click_drag`, `action=left_click`, `action=right_click`, `action=double_click`.", "type": "array"}}, "pixels": {{"description": "The amount of scrolling to perform. Positive values scroll up, negative values scroll down. Required only by `action=scroll`.", "type": "number"}}, "time": {{"description": "The seconds to wait. Required only by `action=wait`.", "type": "number"}}, "status": {{"description": "The status of the task. Required only by `action=terminate`.", "type": "string", "enum": ["success", "failure"]}}}}, "required": ["action"], "type": "object"}}}}}}
</tools>
For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{{"name": <function-name>, "arguments": <args-json-object>}}
</tool_call>
"""

IMAGE_PATH = Path("demo.png")
INSTRUCTION = "Drag to select the last sentence."

def encode_image(image: Image.Image) -> str:
    """Encode PIL image to base64 string"""
    output_buffer = io.BytesIO()
    image.save(output_buffer, format="PNG")
    byte_data = output_buffer.getvalue()
    base64_str = base64.b64encode(byte_data).decode("utf-8")
    return base64_str

def resize_coordinates(coord, size_pred, size_to_be_mapped):
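    """Map a coordinate from the space it was predicted in to another image size."""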
    return (
        round(coord[0] * size_to_be_mapped[0] / size_pred[0]),
        round(coord[1] * size_to_be_mapped[1] / size_pred[1]),
    )

def process_simple_drag_response(parsed):
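    """Extract (start, end) coordinates from a two-step drag response.

    The model is expected to emit a mouse_move (or left_click) to the drag
    start, followed by a left_click_drag to the drag end.
    """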
    if len(parsed) < 2:
        return None
    try:
        first = json.loads(parsed[0])
        second = json.loads(parsed[1])
    except json.JSONDecodeError:
        return None
    if first["arguments"]["action"] not in ("mouse_move", "left_click"):
        return None
    if second["arguments"]["action"] != "left_click_drag":
        return None
    return first["arguments"].get("coordinate"), second["arguments"].get("coordinate")

def draw_drag(image: Image.Image, start, end, output_path: Path):
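    """Visualize the drag: a lime circle at the start, a red circle at the end."""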
    draw = ImageDraw.Draw(image)
    draw.ellipse((start[0]-10, start[1]-10, start[0]+10, start[1]+10), outline="lime", width=3)
    draw.ellipse((end[0]-10, end[1]-10, end[0]+10, end[1]+10), outline="red", width=3)
    draw.line((*start, *end), fill="yellow", width=4)
    image.save(output_path)

def main():
    image = Image.open(IMAGE_PATH)
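    # Qwen2-VL-style models internally resize images and predict coordinates in
    # that resized space, so compute the resized resolution here and advertise
    # it in the system prompt as the screen resolution the model "sees".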
    resized_h, resized_w = qwen_smart_resize(
        image.height,
        image.width,
        max_pixels=2116800,
        min_pixels=12544,
    )

    messages = [
        {
            "role": "system",
            "content": [{"type": "text", "text": FN_CALL_TEMPLATE.format(width=resized_w, height=resized_h)}],
        },
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{encode_image(image)}"}},
                {"type": "text", "text": INSTRUCTION},
            ],
        },
    ]

    client = OpenAI(base_url=BASE_URL, api_key="EMPTY")  # local vLLM ignores the API key
    resp = client.chat.completions.create(
        model=MODEL_ID,
        messages=messages,
        temperature=0.1,
        max_tokens=1024,
    )

    text = resp.choices[0].message.content
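    # Each predicted action is wrapped in <tool_call>...</tool_call> tags.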
    parsed = re.findall(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, flags=re.DOTALL)
    drag = process_simple_drag_response(parsed)
    if not drag:
        print("No drag action detected.")
        return

    start_resized, end_resized = drag
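    # Map the coordinates from the model's resized space back to the original image.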
    start = resize_coordinates(start_resized, (resized_w, resized_h), image.size)
    end = resize_coordinates(end_resized, (resized_w, resized_h), image.size)

    print("Predicted drag:", start, "→", end)
    draw_drag(image.copy(), start, end, IMAGE_PATH.with_name("GUI-Drag-7B_demo.png"))

if __name__ == "__main__":
    main()
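
If everything is wired up correctly, the model answers with two tool calls: a mouse_move (or left_click) to the start of the selection, followed by a left_click_drag to its end. The output below is a hypothetical example; the exact coordinates depend on your demo.png:

<tool_call>
{"name": "computer_use", "arguments": {"action": "mouse_move", "coordinate": [412, 530]}}
</tool_call>
<tool_call>
{"name": "computer_use", "arguments": {"action": "left_click_drag", "coordinate": [866, 558]}}
</tool_call>

The script parses these calls, rescales the coordinates to the original image size, and saves the visualization as GUI-Drag-7B_demo.png next to the input image.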

Citation

If you find this work useful, please consider citing:

@article{cheng2025beyond,
  title={Beyond Clicking: A Step Towards Generalist GUI Grounding via Text Dragging},
  author={Cheng, Kanzhi and Wu, Zhiyong and Wu, Zhenyu and Sun, Qiushi and Liang, Paul Pu and Qiao, Yu and Zhang, Ming and Luo, Xiao and others},
  journal={arXiv preprint arXiv:2601.06031},
  year={2025}
}