Surgical precision with AST-based code editing in Kiro

TL;DR: Over the past few weeks, we've been testing a new AST-based code navigation and editing engine that cuts token usage by 20% on our SWE-PolyBench, a benchmark built from feature request examples (the most frequent type of query in Kiro). It also enables precise, resilient, production-grade code transformations. No more fragile regex or full-file dumps!


Every developer using AI coding assistants has experienced it: the agent reads through thousands of lines to find a single function, then fails to update it because of a minor formatting difference. The current approach, reading entire files and matching exact strings, burns through tokens and breaks easily. We built something better.

Today, we're introducing an AST-based code navigation and editing system that gives Kiro surgical precision when working with your codebase. Instead of probing arbitrary line ranges and matching brittle strings, this tool targets code by structure—functions, classes, imports—and applies typed operations with semantic understanding.

The problem with text-based code tools

Until now, Kiro IDE relied on two text-based tools: readFile for examining code and strReplace for making edits. While straightforward, this approach creates two fundamental issues:

1. High token and latency costs

When looking for a specific function, the agent must read large portions of files, often entire files, to find what it needs. This means thousands of tokens spent on context that isn't relevant to the task.

2. Brittle string matching

Making edits requires exact string matches. A single difference in whitespace, formatting, or comments between the agent's expectation and the actual code results in failed edits or unintended multi-matches. The agent then needs additional iterations to correct the mistake.

These issues compound: the agent makes a mistake, reads the file again to diagnose, attempts another edit, and the cycle continues. Each iteration consumes more tokens and adds latency.
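The brittleness is easy to reproduce. Here is a minimal sketch, assuming a simplified stand-in for a strReplace-style edit (the real tool's behavior is only summarized here, not its actual implementation):

```typescript
// Simplified strReplace-style edit: it only succeeds when oldStr
// appears verbatim in the source file.
function strReplace(source: string, oldStr: string, newStr: string): string | null {
  return source.includes(oldStr) ? source.replace(oldStr, newStr) : null;
}

// The file on disk is indented with four spaces...
const onDisk = "function add(a: number, b: number) {\n    return a + b;\n}";

// ...but the agent reconstructed it with two-space indentation.
const assumed = "function add(a: number, b: number) {\n  return a + b;\n}";

const result = strReplace(onDisk, assumed, "/* edited */");
// result is null: one whitespace difference and the edit fails,
// forcing another read/diagnose/edit cycle.
```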

How the AST-based engine works

The engine replaces text-based operations with AST parsing. Instead of treating code as strings, it understands code as structured entities: functions, classes, methods, imports, and their relationships.

Code read: targeted information extraction

Rather than dumping entire file contents, the engine's code read tool returns only what you need:

  • Signatures: Function and class definitions without implementation details

  • Structure: High-level code organization and relationships

  • Search results: Specific definitions matching your criteria

Compare these two approaches for examining a Java class:

Traditional approach (1,309 tokens):

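A hypothetical sketch of the traditional exchange (tool names, paths, and the Java class are illustrative, not the benchmark's actual example):

```typescript
// readFile-style request: the agent asks for the whole file...
const request = { tool: "readFile", path: "src/main/java/OrderService.java" };

// ...and the response is every line of it, method bodies included.
// The agent pays tokens for all of this even if it only needs one signature.
const response = [
  "import java.util.List;",
  "",
  "public class OrderService {",
  "    private final OrderRepository repository;",
  "",
  "    public Order findById(long id) {",
  "        // ...20 lines of implementation...",
  "    }",
  "",
  "    public List<Order> findAll(int page) {",
  "        // ...35 lines of implementation...",
  "    }",
  "    // ...and every other method body in the class...",
  "}",
].join("\n");
```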

AST-based approach (545 tokens):

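By contrast, a sketch of a signatures-only read (the schema and field names are assumptions for illustration, not Kiro's exact API):

```typescript
// code read request in a signatures mode: no method bodies come back.
const request = {
  tool: "codeRead",
  path: "src/main/java/OrderService.java",
  mode: "signatures",
};

// A structural summary is enough to decide where to edit next.
const response = [
  "class OrderService",
  "  field  OrderRepository repository",
  "  method Order findById(long id)",
  "  method List<Order> findAll(int page)",
].join("\n");
```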

In this example, the engine uses 58% fewer tokens while still delivering the essential structural information needed for navigation and decision-making.

Code write: semantic editing

The engine's code write tool uses selectors to target code elements precisely:

  • ClassName.methodName to target a specific method

  • function:functionName to target a function

  • field:fieldName to target a class field

  • end to append to a file

It supports four typed operations:

  • insert_node: Add new code at specific locations

  • replace_node: Replace entire functions or classes

  • delete_node: Remove code elements cleanly

  • replace_in_node: Make surgical edits within a code block
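Put together, a selector plus a typed operation might look like this (the interface and field names are hypothetical; the post doesn't specify the exact schema):

```typescript
// Hypothetical shape of a code write request combining a selector
// with one of the four typed operations.
interface CodeWriteRequest {
  path: string;
  operation: "insert_node" | "replace_node" | "delete_node" | "replace_in_node";
  selector: string;  // "ClassName.methodName", "function:name", "field:name", or "end"
  content?: string;  // new code for insert/replace operations
}

// Replace one method of a class, addressed by structure rather than by string.
const edit: CodeWriteRequest = {
  path: "src/billing/InvoiceService.ts",
  operation: "replace_node",
  selector: "InvoiceService.applyDiscount",
  content: "applyDiscount(total: number): number {\n  return total * 0.9;\n}",
};
```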

Here's a real example from our benchmarks, for adding a new function to a TypeScript file:

Traditional approach (361 tokens):

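A sketch of what the traditional edit looks like (the file contents and schema are illustrative): to place the new function, oldStr must reproduce existing code verbatim.

```typescript
// strReplace-style edit: the existing function is repeated twice —
// once in oldStr to anchor the match, once more inside newStr — before
// any new code appears. Those repeated lines are pure token overhead.
const edit = {
  tool: "strReplace",
  path: "src/dates.ts",
  oldStr:
    "export function formatDate(d: Date): string {\n" +
    "  return d.toISOString().slice(0, 10);\n" +
    "}",
  newStr:
    "export function formatDate(d: Date): string {\n" +
    "  return d.toISOString().slice(0, 10);\n" +
    "}\n\n" +
    "export function parseDate(s: string): Date {\n" +
    "  return new Date(s);\n" +
    "}",
};
```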

AST-based approach (96 tokens):

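And a sketch of the same edit expressed structurally (schema assumed for illustration): the end selector appends the function without restating any existing code.

```typescript
// code write with an "end" selector: only the new function is sent.
const edit = {
  tool: "codeWrite",
  path: "src/dates.ts",
  operation: "insert_node",
  selector: "end",
  content:
    "export function parseDate(s: string): Date {\n" +
    "  return new Date(s);\n" +
    "}",
};
```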

In this example, the engine uses 73% fewer tokens because it never needs to include surrounding context for exact string matching.

Real-world impact

We evaluated our AST-based engine against traditional tools on two benchmarks:

PolyBench50 results

Running AST read and write operations on PolyBench50 (a subset of SWE-PolyBench) showed consistent improvements:

Metric               Traditional    AST-based engine    Improvement
LLM calls per task   40.88          26.86               34.30%
Output tokens        270,957        189,806             29.95%
Input tokens         680,684        541,346             20.47%

Feature request demonstration

We tested both approaches on a realistic feature request: "Add third-party integrations to AWS Resource Explorer" (Slack/Teams notifications, Jira tickets, SIEM exports, AWS Config integration).

Metric          Traditional    AST-based engine    Improvement
Running time    9m 20s         4m 44s              49.30%
LLM calls       29             22                  24.10%
Input tokens    1,350          1,192               11.70%
Output tokens   761            654                 14.00%
Tool errors     2              0                   -

Why AST matters for production development

The benefits of AST-based code operations go beyond token efficiency:

Resilience: Formatting changes don't break edits. Whether you use 2 spaces or 4, tabs or spaces, the structural edit succeeds.

Precision: Target exactly what you want to change without worrying about similar code elsewhere in the file.

Maintainability: Operations that work today will work tomorrow, even as the surrounding code evolves.

Understanding: AST parsing provides the agent with genuine comprehension of code structure, enabling smarter decisions about where and how to make changes.

This matters especially for feature requests, the most frequent query type in Kiro, accounting for 45% of Vibe mode traffic and 67.6% of Spec mode traffic. These requests often involve modifications across multiple files, where the compound effect of brittle string matching creates significant friction. We're excited by these early results, and the AST-based engine is now available in production.