Repository Intelligence: The Future of AI Coding (2026)
Repository Intelligence is the next evolution beyond autocomplete. Learn how Claude Code, Cursor, and GitHub are using codebase-level AI to understand history, dependencies, and architecture. Full technical guide.
In January 2026, GitHub's Chief Product Officer confirmed a directional shift that has been building in the AI coding ecosystem for two years: the next frontier of AI-assisted development is not generating more code faster. It is understanding more code more deeply.
The distinction matters. Among developers who say AI degrades code quality, 44% blame missing context. Even among AI advocates, 53% still want better contextual understanding. The problem is not model capability at the token level. It is that most AI coding tools still reason about software the way a new contractor reads a single file — without knowing the architecture it lives inside, the decisions that shaped it, or the changes that preceded it.
Without depth, agents cannot see downstream impact or provide reliable verification. Context is what separates useful agents from noise generators.
Repository Intelligence is the capability of an AI coding system to reason about an entire codebase as a unified, relational, historically-aware structure — understanding not just file contents, but commit histories, dependency graphs, architectural patterns, and the causal relationships between changes. It operates at the repository level rather than the file or line level.
This article defines the concept precisely, explains how it works at the architecture level, and examines how Claude Code, Cursor, and GitHub are implementing it in production today.
What Is Repository Intelligence?
Repository Intelligence is a property of an AI coding system, not a product feature. A system exhibits repository intelligence when it can answer questions about a codebase that require reasoning across multiple layers simultaneously:
- Layer 1 — Structural: What files and modules exist, how they are organized, and what the dependency graph looks like.
- Layer 2 — Semantic: What the code does, what patterns it uses, what conventions it follows, and how components relate to each other by meaning rather than just by import.
- Layer 3 — Historical: What changed, when, why, and by whom — the commit graph as a source of architectural intent and technical debt signal.
- Layer 4 — Impact: Given a proposed change, what will it affect? Which downstream components depend on the modified function? Which tests will break? Which patterns will become inconsistent?
A system operating only at Layer 1 and 2 is a context-aware code generator. A system operating across all four layers — providing structural, semantic, historical, and impact reasoning — exhibits repository intelligence.
The Relational Graph Model
The key conceptual shift is treating code not as a collection of files but as a relational graph where:
- Nodes are functions, classes, modules, and components
- Edges are dependencies, imports, inheritance chains, and call graphs
- Node metadata includes commit history, authors, change frequency, and test coverage
- Edge metadata includes the strength and direction of coupling
When an AI system holds this graph in its reasoning context — not just the text of individual files — it can answer questions that file-level AI cannot: "If I change this function signature, what breaks?" "Where is authentication logic distributed across this codebase?" "Which modules have the highest change frequency and lowest test coverage?"
These are not autocomplete questions. They are architectural questions. Repository intelligence is what enables an AI to answer them.
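The relational graph model can be sketched in a few lines. This is a minimal illustration, not any shipping tool's data model — the node names, metadata fields, and ranking heuristic below are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A function, class, or module in the code graph."""
    name: str
    change_count: int = 0       # historical metadata from commits
    test_coverage: float = 0.0  # fraction of lines covered

@dataclass
class CodeGraph:
    nodes: dict[str, Node] = field(default_factory=dict)
    edges: set[tuple[str, str]] = field(default_factory=set)  # (caller, callee)

    def add(self, node: Node) -> None:
        self.nodes[node.name] = node

    def hotspots(self) -> list[str]:
        """Modules with high churn and low coverage -- candidates for review."""
        ranked = sorted(self.nodes.values(),
                        key=lambda n: (-n.change_count, n.test_coverage))
        return [n.name for n in ranked]

g = CodeGraph()
g.add(Node("auth.session", change_count=47, test_coverage=0.0))
g.add(Node("billing.invoice", change_count=3, test_coverage=0.9))
g.add(Node("api.router", change_count=47, test_coverage=0.6))
print(g.hotspots())  # ['auth.session', 'api.router', 'billing.invoice']
```

Even this toy version answers one of the architectural questions above — "highest change frequency, lowest test coverage" — by querying node metadata rather than file contents.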
Why File-Level AI Is No Longer Sufficient
The previous generation of AI coding tools operated at the file and line level. The limitation is architectural, not a matter of model capability.
AI-assisted development will account for over half of new enterprise code by 2026, placing unprecedented pressure on review and validation practices. AI-generated code passes tests while breaking assumptions elsewhere. The root cause: file-level AI has no awareness of those assumptions, because the assumptions live in the relationships between files, not inside any single one.
The File-Level Limitation Matrix
How Repository Intelligence Works: Architecture Level
Repository intelligence is implemented through a stack of four architectural components. Understanding each layer is necessary to evaluate tools and make infrastructure decisions.
Component 1: Codebase Indexing and Chunking
The foundation is converting a codebase from a file tree into a structured, searchable index. This involves:
- Chunking: The codebase is split into semantically meaningful units — functions, classes, and methods — rather than arbitrary character windows. AST-based chunking parses code into Abstract Syntax Trees and emits chunks that preserve structural boundaries, achieving approximately 40% token reduction at equivalent retrieval quality compared to naive character-based chunking.
- Embedding generation: Each chunk is converted by a custom embedding model into a vector representation that captures its semantic meaning. This lets retrieval match a user query with semantically related code even when exact keywords do not overlap, which significantly improves quality for tasks such as code understanding, refactoring, and debugging.
- Incremental synchronization: Codebases change constantly. Rather than recomputing indexes for every file from scratch, the system detects differences and updates only the affected data. Most edits leave most chunks unchanged, so caching embeddings by chunk content ensures that unchanged code hits the cache and agent responses stay fast.
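As a rough illustration of AST-based chunking, Python's standard `ast` module can split a file along function and class boundaries and hash each chunk so unchanged code hits the embedding cache. This is a simplified sketch, not any vendor's actual indexer:

```python
import ast
import hashlib

def chunk_python_source(source: str) -> list[dict]:
    """Split a module into function/class chunks along AST boundaries,
    keyed by a content hash for incremental re-indexing."""
    tree = ast.parse(source)
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            text = ast.get_source_segment(source, node)
            chunks.append({
                "name": node.name,
                "lines": (node.lineno, node.end_lineno),  # stored alongside the embedding
                "hash": hashlib.sha256(text.encode()).hexdigest(),
                "text": text,
            })
    return chunks

src = '''
def login(user):
    return issue_token(user)

class TokenService:
    def refresh(self):
        ...
'''
for c in chunk_python_source(src):
    print(c["name"], c["lines"])
```

Because the hash is computed from chunk content, editing `login` leaves the `TokenService` hash unchanged, so its cached embedding can be reused.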
Component 2: Vector Database Storage and Retrieval
Embeddings are stored in a vector database optimized for high-dimensional similarity search. The server stores an obfuscated relative file path and the line range corresponding to each chunk alongside its embedding vector. This enables filtering of vector search results by file path while ensuring that, in privacy mode, no plaintext code is ever stored persistently on remote servers.
At query time, the user's natural language query is embedded using the same model, and a nearest-neighbor similarity search retrieves the most semantically relevant code chunks. This enables queries like "find all authentication handling" to return relevant results even when no file is named auth and no function is named authenticate.
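The retrieval step can be illustrated with a toy index. Real systems use learned, high-dimensional embeddings; the vectors and chunk IDs below are invented purely to show the nearest-neighbor mechanic:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy index: chunk id -> embedding vector (hypothetical values)
index = {
    "auth/session.py:check_login":  [0.9, 0.1, 0.0],
    "billing/invoice.py:total":     [0.0, 0.2, 0.9],
    "middleware/guard.py:require":  [0.8, 0.3, 0.1],
}

def search(query_vec: list[float], k: int = 2) -> list[str]:
    """Nearest-neighbor search: rank chunks by similarity to the query."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [chunk_id for chunk_id, _ in ranked[:k]]

# A query like "find all authentication handling", embedded near the
# auth direction, surfaces both auth chunks -- neither is named "authenticate"
print(search([1.0, 0.2, 0.0]))
```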
Component 3: Commit Graph Reasoning
This is the layer that distinguishes repository intelligence from sophisticated RAG. The commit graph encodes:
- Which functions change together frequently (coupling signal)
- Which components are modified most often (instability signal)
- Which authors understand which modules (knowledge ownership signal)
- What decisions were made and why (PR descriptions, commit messages)
A system with commit graph access can answer: "This function has been modified 47 times in 6 months by 8 different developers. It has no test coverage. Any change here should be treated as high risk." That is not code generation. That is architectural judgment informed by history.
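The signals above fall out directly once history is parsed into per-commit file sets. A minimal sketch — the commit data here is hypothetical:

```python
from collections import Counter
from itertools import combinations

# Hypothetical parsed history: each entry is the set of files in one commit
commits = [
    {"auth/token.py", "auth/session.py"},
    {"auth/token.py", "auth/session.py", "api/routes.py"},
    {"auth/token.py"},
    {"billing/invoice.py"},
]

# Instability signal: how often each file changes
change_freq = Counter(f for c in commits for f in c)

# Coupling signal: which files change together
co_change = Counter(frozenset(p) for c in commits
                    for p in combinations(sorted(c), 2))

print(change_freq.most_common(1))  # most-churned file
print(co_change.most_common(1))    # most tightly coupled pair
```

The same counters, joined with coverage data and author metadata, are what back a judgment like "modified 47 times by 8 developers, no tests, high risk."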
Component 4: Dependency Mapping and Impact Analysis
Given a change, a repository-intelligent system builds an analysis showing downstream impact: which endpoints are affected, what the function in question calls, what historical patterns apply, and what related tests exist.
The dependency map extends beyond direct imports to the full transitive call graph — enabling impact analysis that identifies components that will break even when they do not directly import the modified module.
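A sketch of transitive impact analysis over a hypothetical call graph — note that `jobs.nightly` never imports `utils.transform` directly, yet is still flagged:

```python
from collections import deque

# Hypothetical call graph: callee -> direct callers
callers = {
    "utils.transform": ["billing.report", "etl.pipeline"],
    "etl.pipeline":    ["jobs.nightly"],
    "billing.report":  [],
    "jobs.nightly":    [],
}

def downstream_impact(changed: str) -> set[str]:
    """All modules that transitively depend on `changed`,
    including ones that reach it only through intermediaries."""
    seen: set[str] = set()
    queue = deque([changed])
    while queue:
        for caller in callers.get(queue.popleft(), []):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

print(downstream_impact("utils.transform"))
```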
Claude Code and Repository Intelligence
Claude Code implements repository intelligence through a combination of on-demand file reading, structured project context, and a large context window that accommodates cross-file reasoning at depth.
The On-Demand Reading Architecture
Instead of trying to cram everything into context upfront, Claude Code reads files on demand as it reasons through a task. Combined with Claude's 200K+ token context window, this means it can effectively see much more of the codebase during a complex task — it follows import chains, reads related test files, and checks configuration.
The practical difference shows up in scale: Cursor and Windsurf are comfortable up to roughly 30–50 files, while Claude Code is comfortable at 100+ — which matters enormously when a refactoring task touches an authentication layer, API routes, middleware, database queries, and frontend components simultaneously.
CLAUDE.md as Persistent Architectural Context
CLAUDE.md is a markdown file added to the project root that Claude Code reads at the start of every session. It is used to set coding standards, architecture decisions, preferred libraries, and review checklists.
Without CLAUDE.md:
- Claude: [Generates generic purple SaaS layout]
- User: No, navy. Asymmetric. Spacing is off.
- Claude: [Regenerates... styling breaks]
- User: Why is Tailwind v4 mixing in?
- Result: 40 minutes of iteration and debugging.

With CLAUDE.md:
- Claude: [Reads the project's design DNA; applies navy, asymmetric layout, v3 syntax]
- User: Perfect. Now the features.
- Result: 2 minutes to a production-ready structure.
This is repository intelligence at the persistent context layer. The CLAUDE.md file encodes what would otherwise require inferring from code: the architectural decisions, the coding conventions, the module boundaries, and the review standards. It is the human-authored architectural summary that primes the AI's structural reasoning before any task begins.
Subagent Architecture for Large Codebases
Claude Code supports subagents for investigation. When broad codebase exploration is needed, a subagent can search in its own context window and report back a summary, keeping the main session clean. This addresses the context budget problem in large repositories — parallel agents explore different sections of the codebase simultaneously and synthesize findings into the main reasoning thread.
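The fan-out/synthesize pattern can be sketched with ordinary concurrency primitives. The `explore` function below is a hypothetical stand-in for a real subagent; the point is that only its compact summary — not its full exploration context — reaches the coordinator:

```python
from concurrent.futures import ThreadPoolExecutor

def explore(module: str) -> str:
    """Stand-in for a subagent: investigates one module in its own
    context window and returns only a short summary."""
    return f"{module}: entry points mapped, 2 high-risk functions flagged"

modules = ["auth", "billing", "api"]

# Fan out: one parallel exploration per module
with ThreadPoolExecutor(max_workers=3) as pool:
    summaries = list(pool.map(explore, modules))

# Synthesize: only the summaries enter the main reasoning thread,
# keeping the coordinator's context budget intact
print("\n".join(summaries))
```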
Practical Workflow Example: Issue-to-PR
- Developer pastes a GitHub issue: "Auth token refresh fails silently on mobile clients"
- Claude Code reads the entry point matching the issue domain
- It follows import chains from the auth module to the token refresh handler
- It reads the mobile API adapter that calls the token service
- It identifies the silent failure: refresh errors are caught and logged but never surfaced to callers
- It writes the fix across three files: error propagation in the token service, caller handling in the adapter, and a new test case
- It generates a structured commit message and draft PR description
Cursor and Context-Aware Codebase Indexing
Cursor implements repository intelligence through a pre-built semantic index with real-time synchronization — a fundamentally different architectural approach from Claude Code's on-demand reading.
The Turbopuffer Vector Index
Cursor computes an embedding for the user's question or code context, sends it to Turbopuffer (Cursor's vector database) which performs a nearest-neighbor search to find code chunks semantically similar to the query, and then the local client reads these relevant code chunks from local files. Critically, actual code content remains on the machine and is retrieved locally.
Cursor builds a searchable index of the codebase when a project is opened. In Cursor's evaluations, semantic search improved response accuracy by 12.5% on average, produced code changes that were more likely to be retained in codebases, and raised overall request satisfaction.
The Merkle Tree Synchronization System
Cursor uses Merkle trees for synchronization. When a file changes, Cursor walks only the branches where hashes differ. In a workspace with 50,000 files, just filenames and SHA-256 hashes add up to roughly 3.2MB. Without the tree structure, that data moves on every update. With it, Cursor transfers only the changed branches.
New users inside an organization don't need to rebuild the index from scratch. The client computes a similarity hash from the Merkle tree — a single value summarizing all file content hashes — and the server searches for matching indexes from the same team. Since clones of the same codebase average 92% similarity across users within an organization, existing index infrastructure can be securely reused.
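The mechanics can be illustrated with a toy two-level Merkle tree: hash each file, hash each directory from its file hashes, hash a root from the directory hashes, then descend only into branches whose hashes differ. This is a simplified sketch of the idea, not Cursor's implementation (it ignores deletions, among other things):

```python
import hashlib

def h(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

def build_tree(files: dict[str, str]) -> dict:
    """Two-level Merkle tree: file hashes -> directory hashes -> one root."""
    dirs: dict[str, dict[str, str]] = {}
    for path, content in files.items():
        d, _, name = path.rpartition("/")
        dirs.setdefault(d, {})[name] = h(content)
    dir_hashes = {d: h("".join(sorted(fh.values()))) for d, fh in dirs.items()}
    root = h("".join(dir_hashes[d] for d in sorted(dir_hashes)))
    return {"root": root, "dirs": dir_hashes, "files": dirs}

def changed_paths(old: dict, new: dict) -> list[str]:
    if old["root"] == new["root"]:
        return []                        # roots match: nothing to sync
    out = []
    for d, dh in new["dirs"].items():
        if old["dirs"].get(d) != dh:     # descend only differing branches
            for name, fh in new["files"][d].items():
                if old["files"].get(d, {}).get(name) != fh:
                    out.append(f"{d}/{name}")
    return out

v1 = build_tree({"src/a.py": "x = 1", "src/b.py": "y = 2", "docs/r.md": "hi"})
v2 = build_tree({"src/a.py": "x = 9", "src/b.py": "y = 2", "docs/r.md": "hi"})
print(changed_paths(v1, v2))  # only src/a.py needs re-indexing
```

One edit changes one file hash, one directory hash, and the root; the `docs` branch is never examined, which is why sync cost scales with the size of the change rather than the size of the repository.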
PR History as Repository Context
Cursor automatically indexes all merged PRs from a repository's history. PR search helps developers understand the codebase's evolution by making historical changes searchable and accessible through AI. This is the commit graph reasoning layer — enabling queries about why code looks the way it does, not just what it does.
Privacy Architecture
Cursor creates embeddings without storing filenames or source code. Filenames are obfuscated and code chunks are encrypted. When Agent searches the codebase, Cursor retrieves embeddings from the server and decrypts chunks on demand.
GitHub and the Enterprise Repository Intelligence Shift
GitHub is building repository intelligence as a platform capability rather than an editor feature — with significant architectural implications for enterprise teams.
GitHub Agentic Workflows
GitHub Agentic Workflows, launched in technical preview, introduce a way to automate complex, repetitive repository tasks using coding agents that understand context and intent. This enables workflows such as automatic issue triage and labeling, documentation updates, CI troubleshooting, test improvements, and reporting.
The architectural distinction: these agents hold context across the entire repository — correlating behavior across services and applying organization-wide rules to every pull request — rather than operating on individual PRs in isolation.
Continuous AI: Beyond Deterministic CI
GitHub Next has been exploring a pattern called Continuous AI: background agents that operate in the repository the way CI jobs do, but for tasks that require reasoning instead of rules. CI was designed for binary outcomes — a test passes or fails. Continuous AI handles work that requires interpretation, synthesis, and context.
"Any task that requires judgment goes beyond heuristics. Any time something can't be expressed as a rule or a flow chart is a place where AI becomes incredibly helpful." — Idan Gazit, head of GitHub Next
This creates a new category of automated engineering work: documentation accuracy validation, performance regression detection through behavioral simulation, and PR-level impact prediction — all running as background agents against the full repository context.
Enterprise Code Review at Scale
AI-assisted coding pushed PR volume up 29% year-over-year. For large GitHub estates with hundreds of developers and thousands of repositories, AI agents have become the verification layer that human-only review processes can no longer sustain. They provide the depth, consistency, and governance required to keep quality at the velocity at which engineering teams now operate.
Real Developer Workflows Enabled by Repository Intelligence
Workflow 1: Large-Scale Cross-Module Refactoring
Scenario: Migrating a monolith's authentication system from session tokens to JWTs across 47 files.
Without Repository Intelligence:
- Developer manually identifies files referencing the session token service
- Changes propagate inconsistently as edge cases are discovered mid-refactoring
- Tests break in unexpected modules with no clear root cause
- Review takes multiple cycles due to incomplete change surface
With Repository Intelligence:
- Agent builds a dependency map from the session token service outward
- Identifies all 47 direct and transitive consumers
- Generates a refactoring plan with module groupings by coupling strength
- Executes changes in dependency order (leaf nodes first, then dependents)
- Runs tests after each module group to catch propagation issues early
- Generates a structured PR with per-module rationale
Key enabler: Transitive dependency mapping. Without it, step 1 only finds direct imports — missing modules that consume session tokens through intermediate services.
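Executing changes in dependency order is a topological sort. Python's standard `graphlib` makes this a one-liner over a dependency map (module names here are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical map: module -> its direct dependencies within the refactoring
deps = {
    "api.routes":      {"auth.middleware"},
    "auth.middleware": {"auth.tokens"},
    "auth.tokens":     set(),
    "mobile.adapter":  {"auth.tokens"},
}

# static_order() yields dependencies before dependents: leaf modules are
# migrated first, so nothing is edited before the code it relies on
order = list(TopologicalSorter(deps).static_order())
print(order)
```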
Workflow 2: Regression-Aware PR Generation
Scenario: New engineer implements a feature that unintentionally modifies a shared utility function.
Repository Intelligence process:
- AI analyzes the proposed diff against the dependency graph
- Identifies that the modified utility is called by 12 other modules
- Surfaces 3 modules where the change creates behavioral inconsistency
- Adds regression tests for the affected call sites before PR is submitted
- PR description includes an automated impact summary: "Affects 12 consumers. 3 required behavioral updates. Regression tests added."
AI systems hold context across repositories, correlate behavior across services, and apply organization-wide rules to every pull request. This is a fundamental shift from static analysis or isolated LLM suggestions, which operate in short bursts and do not understand how a change fits into the broader architecture.
Workflow 3: Legacy Codebase Onboarding
Scenario: Engineer joining a team inheriting a 200K-line Rails monolith with sparse documentation.
Without Repository Intelligence: 2–3 weeks of manual exploration to build a working mental model of the system.

With Repository Intelligence:
- Ask: "Explain the main request lifecycle from HTTP entry to database"
- Agent traces the path across router → middleware stack → controller → service layer → ORM
- Ask: "Which modules have the highest change frequency and lowest test coverage?"
- Agent builds a risk register from commit graph + coverage data
- Ask: "What was the architectural rationale for the service layer separation?"
- Agent reads relevant PR descriptions and commit messages that introduced it
Anthropic's data teams report engineers onboard 80% faster when a solid CLAUDE.md is in place — and Claude Code's codebase analysis accurately maps component relationships in seconds on codebases with 50K+ lines.
Workflow 4: Pre-Merge Impact Analysis
Scenario: Proposed change to a core data transformation function.
Repository Intelligence process:
- Agent identifies all downstream consumers in the call graph
- Queries commit history: how many times has this function changed in the past 6 months? 47 times. By how many different engineers? 8.
- Checks test coverage: none for the direct function.
- Risk assessment generated: High — high change frequency, many consumers, no tests
- Required actions before merge: add unit tests, notify owning team of downstream impact
- PR blocked from auto-merge pending review by codebase owner
This workflow does not require AI to write any code. It requires AI to understand the codebase well enough to flag the risk.
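A toy version of this risk flag — the thresholds and weights below are invented for illustration, not taken from any tool:

```python
def assess_risk(change_count: int, consumers: int, coverage: float) -> str:
    """Combine the three signals the workflow above queries:
    churn, blast radius, and test coverage (illustrative thresholds)."""
    score = 0
    score += 2 if change_count >= 20 else (1 if change_count >= 5 else 0)
    score += 2 if consumers >= 10 else (1 if consumers >= 3 else 0)
    score += 2 if coverage == 0.0 else (1 if coverage < 0.5 else 0)
    return {0: "low", 1: "low", 2: "medium", 3: "medium"}.get(score, "high")

# The case from the workflow: 47 changes, 12 consumers, no tests
print(assess_risk(change_count=47, consumers=12, coverage=0.0))  # high
```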
Challenges and Limitations
Repository intelligence is not universally applicable. The following constraints are architectural, not incidental.
Context Window Limits in Large Monorepos

Long context support of 1M tokens is currently limited to Claude Code users on Max 20x plans and is not generally available for all users, including those accessing Claude via the API. For standard deployments, Claude Code's effective code context is 150K+ tokens, read on demand — comfortable up to roughly 100+ files. Repositories with 500K+ files require subagent parallelism or vector index pre-filtering to remain tractable.
Indexing Latency for Very Large Repositories

Large monorepos with 100K+ files can take 30–60 seconds for initial codebase indexing on first load. Semantic search is not available until at least 80% of indexing is finished. For teams working on large enterprise repositories, first-run delays must be planned into the development environment setup workflow.
Misleading Commit History

Commit graph reasoning depends on commit message quality and commit atomicity. Repositories with large squash merges, unclear commit messages, or bulk-committed generated code produce noisy historical signals. An AI reading commit history in these repositories may infer false architectural intent from the noise — treating a mass file rename as an architectural decision rather than a maintenance task.
Embedding Security Surface

Academic research has shown that reversing embeddings is possible in some cases. While current attacks typically rely on having access to the embedding model and working with short strings, there is a potential risk that an adversary who gains access to a vector database could extract information about indexed codebases from stored embeddings. Enterprise teams should evaluate this risk against their threat model before indexing sensitive codebases on third-party infrastructure.
False Pattern Recognition
Repository intelligence uses historical patterns to inform current decisions. In codebases undergoing deliberate architectural migration — moving from one pattern to another — historical analysis will surface the old pattern as the predominant convention and may suggest new code be written to the legacy standard. Explicit architectural guardrails in CLAUDE.md or equivalent project context files are required to override pattern inference during migration periods.
Cost Scaling

Every file Claude reads counts toward the context window and gets billed on every subsequent API call. Conversation history compounds — turn after turn, the session re-sends everything. For heavy repository intelligence workloads — multi-hour autonomous sessions, large-scale refactoring jobs, or continuous background analysis — token costs scale with depth of repository engagement. Teams should implement vector pre-filtering, subagent isolation, and session scoping to manage cost at scale.
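The compounding is easy to quantify with a back-of-envelope model (illustrative numbers only; real pricing, prompt caching, and context compaction all change the picture):

```python
def session_tokens(base_context: int, tokens_per_turn: int, turns: int) -> int:
    """Total input tokens billed across a session when every turn
    re-sends the full accumulated history."""
    total = 0
    history = base_context
    for _ in range(turns):
        history += tokens_per_turn  # each turn appends to the history...
        total += history            # ...and the whole history is re-sent
    return total

# 50K of repo context plus 2K per turn over a 30-turn session
print(f"{session_tokens(50_000, 2_000, 30):,} input tokens")
```

The quadratic term in the per-turn additions is why scoping sessions and isolating exploration in subagents pays off disproportionately on long tasks.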
Professional Workflow System: Implementing Repository Intelligence
The following system is designed for engineering teams integrating repository intelligence into their development workflow. It progresses through three implementation stages.
Stage 1: Foundation Layer (Week 1–2)
Objective: Establish the codebase indexing and persistent context infrastructure.
Step 1: CLAUDE.md / Project Context File Creation

Before any AI interaction with the codebase, create a structured project context file containing:
# [Project Name] — AI Context
## Architecture
- Primary pattern: [MVC / Hexagonal / Event-driven]
- Service boundaries: [list of major modules and their responsibilities]
- Entry points: [primary request flows]
## Conventions
- Language and version: [e.g., TypeScript 5.3, strict mode]
- Naming patterns: [camelCase functions, PascalCase classes]
- Error handling: [pattern description]
- Testing framework: [Jest / Vitest / PyTest]
## Current Migration State
- [Any in-progress architectural changes]
- [Patterns being deprecated]
- [Patterns being adopted]
## Review Requirements
- [ ] Dependency impact assessed for changes touching [high-risk modules]
- [ ] Test coverage maintained for modified functions
- [ ] No direct database calls outside the repository layer
Step 2: Codebase Index Initialization
- For Cursor: Enable codebase indexing in Settings > Indexing & Docs. Configure `.cursorignore` to exclude build artifacts, vendored code, and large data files. Verify indexing completeness at 100% before beginning agent sessions.
- For Claude Code: Run `/init` on the project to generate the initial `CLAUDE.md` scaffold. Use `claude --dangerously-skip-permissions` in isolated development environments for unrestricted first-pass exploration.
- For both: Exclude files tracked in Git but irrelevant to AI reasoning: large binary files, compiled assets, test fixtures.
Step 3: Baseline Repository Map

Before beginning task-specific work, generate a one-time architectural map:
Task: Analyze this repository and produce:
- Module dependency graph (text representation)
- High-risk areas: functions with high change frequency and low test coverage
- Architecture summary: primary patterns, entry points, service boundaries
- Onboarding guide: the five most important things a new engineer must understand
Stage 2: Task Integration Layer (Week 3–4)
Objective: Build repository intelligence into the standard development workflow at specific task types.
Pre-change impact analysis (mandatory for high-risk modules):
Before implementing [TASK]:
- Map all consumers of [function/module being changed]
- Identify which consumers will require updates
- Assess test coverage for the change surface
- Flag any changes that modify behavior observable by external callers

Report findings before writing any code.
PR generation workflow:
Task completed: [description]

Generate a pull request that includes:
- Summary: what changed and why
- Impact analysis: which components are affected
- Test coverage: what tests were added/modified
- Risk assessment: any edge cases or downstream concerns
- Reviewer guidance: what to focus on in review
Legacy code onboarding workflow:
I am unfamiliar with [module/system].
- Explain its purpose and responsibility within the architecture
- Trace the primary request/data flow through it
- Identify the three highest-risk areas (high coupling, low test coverage)
- Explain any architectural decisions that are non-obvious from the code alone
Stage 3: Governance and Scale Layer (Month 2+)
Objective: Make repository intelligence a team standard rather than an individual practice.
- Shared CLAUDE.md management: Version-control the project context file. Treat it as living documentation — update it when architectural decisions change, when migrations complete, or when new conventions are adopted. Assign ownership to the engineering lead.
- CI integration: Add repository-level AI analysis to the CI pipeline for high-risk module changes. Use GitHub Agentic Workflows or equivalent to trigger impact analysis automatically on PRs touching defined high-risk paths.
- Subagent parallelism for large codebases: For repositories exceeding 100 files of relevant context, decompose analysis tasks into parallel subagent sessions — one per major module — and synthesize results in a coordination session.
- Index hygiene: Review and update `.cursorignore` / `.gitignore` exclusions quarterly. Build artifact accumulation is the primary cause of index quality degradation over time.
Common Mistakes and Misapplications
- Treating repository intelligence as a replacement for code review. Repository intelligence augments review quality — it does not replace human judgment on architectural decisions, business logic correctness, or security-critical code. The agent surfaces what to look at. The engineer determines what it means.
- Using CLAUDE.md as a dumping ground. A bloated `CLAUDE.md` wastes context — Claude processes the whole file every session. Keep it under 150 lines. A `CLAUDE.md` that contains every convention, every historical decision, and every edge case produces worse results than a tightly scoped one that covers only what matters most.
- Indexing the entire repository without filtering. Including build artifacts, vendored dependencies, and large data files in the codebase index increases indexing time, degrades search relevance, and increases cost. Semantic search returns irrelevant results when the index includes too much noise.
- Assuming commit history is reliable signal. Repositories with poor commit discipline — vague messages, large squash merges, mixed unrelated changes in single commits — produce misleading historical context.
- Running unbounded sessions on large codebases. Without context scoping, a long agentic session on a large codebase will consume context budget exploring files that are not relevant to the task. Use explicit scope directives: "Focus on the authentication module only. Do not read files outside `/src/auth/` unless a specific import requires it."
- Treating the first-pass index as production-ready. Initial codebase indexing on very large repositories may be incomplete or may include stale cached embeddings.
- Delegating security-sensitive analysis without review. AI surface-level analysis is a triage tool, not a security audit.
Strategic Implications: 2026–2028
Multi-Agent Repository Orchestration
The trajectory points toward teams of specialized agents working concurrently on different subsystems of the same repository. Each agent holds deep context for its assigned module. A coordination agent synthesizes cross-module impacts and manages inter-agent dependencies. GitHub Agentic Workflows already demonstrate this with parallel agents handling issue triage, documentation, CI troubleshooting, and test improvement simultaneously.
The architectural implication for teams: the human engineer's role shifts from individual contributor to task specification and output review — directing agent teams rather than writing the implementation.
Continuous Repository Intelligence Pipelines
Continuous AI operates as background agents in the repository for tasks that require reasoning. Software engineering has always included work that is repetitive, necessary, and historically difficult to automate — not because it lacks value, but because it resists deterministic rules.
The near-term roadmap: code quality regression detection running on every commit, documentation synchronization as a background process, test coverage maintenance as a continuous agent, and architectural drift detection that alerts when new code diverges from established patterns.
Self-Maintaining Repositories
The endpoint of this trajectory is a repository with defined architectural invariants — encoded in project context files and CI agents — that are automatically enforced, maintained, and documented as the codebase evolves. Not autonomous code generation without oversight, but codebases where the structural rules are machine-readable and machine-enforced with human approval gates at defined thresholds.
Who Should Care and Why
- Solo Developer: The leverage is in onboarding and refactoring. Repository intelligence collapses the time needed to understand an unfamiliar codebase from days to hours.
- Startup Engineering Team (2–10 engineers): The leverage is in PR quality and architectural coherence. As a team's codebase grows from prototype to production, maintaining coherence across modules becomes the primary quality bottleneck.
- Enterprise CTO: The leverage is in risk management and velocity. AI-assisted coding has pushed PR volume up 29% year-over-year. For large GitHub estates, AI agents have become the verification layer that human-only review processes can no longer sustain.
- Open-Source Maintainer: The leverage is in contributor onboarding and issue triage.
Strategic Conclusion
Repository intelligence is not an incremental improvement to AI code generation. It is a categorical shift in what AI systems know about software.
File-level AI knows what code says. Repository intelligence knows what code means — how it got there, what it depends on, what will break if it changes, and where it fits in the architecture it inhabits. That knowledge is what enables AI to participate in the decisions that matter most in software engineering: not the decisions about which syntax to use, but the decisions about what to change, whether to change it, and what the consequences will be.
The implementations are at different stages. Cursor's semantic search has improved response accuracy by 12.5% and produces code changes more likely to be retained in codebases. Claude Code's on-demand architecture handles 100+ file contexts with architectural reasoning that smaller-context tools cannot match. GitHub's Continuous AI pattern runs reasoning agents as background processes across entire repository histories.
What is clear: the developers and organizations that build repository intelligence into their workflows now — establishing project context files, configuring semantic indexes, implementing continuous AI pipelines — will have compounding advantages as the capabilities mature. The architectural investment is low. The leverage is high.
Repository intelligence is not the end state of AI-assisted development. It is the infrastructure that makes the end state possible.