AI sees the screen, decides, and acts.
Screen capture built for that. One CLI.
From Claude Code, Cursor, or a shell script.
Capture the full screen, let AI locate the part that matters, zoom in by coordinates, and iterate with higher precision. No more pasting screenshot after screenshot.
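The zoom step is plain coordinate arithmetic. A minimal sketch, assuming the AI reports the interesting region in normalized 0–1 coordinates; the function only converts that into a pixel crop box for the follow-up capture (screen size values are examples, not Gaze defaults):

```shell
# Convert a normalized region (x, y, w, h in 0-1) reported by the AI
# into integer pixel coordinates for a zoomed-in follow-up capture.
to_pixels() {  # usage: to_pixels x y w h screen_w screen_h
  awk -v x="$1" -v y="$2" -v w="$3" -v h="$4" -v W="$5" -v H="$6" \
    'BEGIN { printf "%d,%d,%d,%d\n", x*W, y*H, w*W, h*H }'
}

to_pixels 0.25 0.10 0.50 0.30 2560 1440   # -> 640,144,1280,432
```

Feed the resulting box back into the next capture and the loop tightens on its own.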
Capture the error screen from the CLI. Run OCR and the stack trace comes through accurately.
Burst mode takes 5 or 10 frames from a single command, ready to hand to AI together.
Auto-detect and mask emails, API keys, and personal data. Internal screens stay safe to hand to AI.
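A rough illustration of what a masking pass does, with two simplified patterns; Gaze's actual detection rules and placeholder strings are not documented here, so treat both regexes as assumptions:

```shell
# Redact email addresses and API-key-shaped tokens from text before it
# is handed to an LLM. Both patterns are simplified examples.
mask() {
  sed -E \
    -e 's/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/[EMAIL]/g' \
    -e 's/sk-[A-Za-z0-9]{20,}/[API_KEY]/g'
}

echo 'Contact dev@example.com, key sk-abcdefghijklmnopqrstuv' | mask
# -> Contact [EMAIL], key [API_KEY]
```

The same idea extends to phone numbers, access tokens, and any site-specific patterns an admin configures.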
Record the screen and extract frames. For conveying animations or transitions.
Crop the region, run OCR, save it. Layout detection and coordinate targeting are handled by AI.
gaze watch observes the screen and, on change, pulls the diff via OCR. Good for deploy checks and UI drift.
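The change-detection half of that loop is ordinary diffing of two OCR text snapshots. A sketch with plain POSIX tools, assuming gaze watch does the capture and OCR itself; the snapshot contents below are fabricated for illustration:

```shell
# Print the lines that appear in the new OCR snapshot but not the old
# one. File names and contents are illustrative only.
changed_lines() {  # usage: changed_lines old.txt new.txt
  diff "$1" "$2" | sed -n 's/^> //p'
}

printf 'Build: ok\nDeploy: pending\n' > old.txt
printf 'Build: ok\nDeploy: failed\n'  > new.txt
changed_lines old.txt new.txt   # -> Deploy: failed
```

Only the changed lines need to reach the AI, which keeps the prompt small on long-running watches.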
Capture the full screen, locate the right region, zoom by coordinates, read it as text, move on.
The "see and decide" loop humans used to run — now AI runs it through the CLI.
Capture, OCR, resize — all local.
Mask emails and API keys before anything goes to an LLM. Admins can block sending outright.
No image data leaves the device, so the DPA scope is narrow. Most checklist items come back "not applicable".
One-time license. Updates after year one: $15/year. Skip renewal and your current version keeps working.
Admins configure masking rules and outbound controls.
From 3 seats. Annual: $120/user/year.
SSO, audit logs, dedicated support. For larger rollouts.
The built-in tool just saves an image. Gaze adds OCR, burst capture, and sensitive-data masking, callable from the CLI in one command.
Nowhere. Processing is local. Telemetry is off by default.
It uses the macOS Vision framework. Japanese is supported.
Your last version keeps working. Renewals are optional at $15/year.
Yes. brew install gaze gives you the CLI; launching the app gives you the GUI. Same binary.
Yes. OCR, capture, and masking all run locally. No network required.