The Accessible Domain: Knowledge Engineering for AI-Assisted Development

11.5.2026 | 10 minutes reading time

The Old Promise

In the late 1970s, Stanford computer scientist Edward Feigenbaum coined the term "Knowledge Engineering". He described it as the process of extracting expert knowledge, structuring it, and making it usable within a software system. Central to this was a new role: the "Knowledge Engineer" as a mediator between domain expert and system. The promise: if expert knowledge could be translated into rules, expertise could be replicated at will.

In the 1980s, companies worldwide invested billions in rule-based expert systems. Within a decade, the approach failed due to three problems:

  • Knowledge Acquisition Bottleneck: Domain experts often could not fully articulate their knowledge. Much of it was implicit and could not be captured in if-then rules.
  • Scaling Problem: Knowledge bases were coded by hand, rule by rule. Complexity grew exponentially: every new rule could interact with every existing one. Every new domain required starting from scratch.
  • Brittleness: The systems worked within their narrow rule sets but failed when faced with unforeseen cases.

Knowledge Engineering disappeared from the discourse.

Forty years later, we faced a similar problem in a concrete migration project. Domain knowledge had to be extracted from the minds of experts, structured, and prepared so that both the development team and AI agents could work with it.

The difference from then: we no longer translate domain knowledge into formal rules. The rule-based systems of the 80s operated on explicitly coded knowledge. This was precise in principle, but the complexity of the rule sets became unmanageable for humans. LLMs work differently. They abstract from natural language without "having" the knowledge in a formal sense. This makes them less precise, but radically better at handling volume and complexity. This is exactly why our approach works: the AI helps us structure and make domain knowledge accessible. Humans still verify the correctness.

What emerged from this process is what we call the "accessible domain".

The Starting Point

An insurance company has been running a tool on Excel, Visual Basic, and MS Access for almost 20 years. It pulls data from the ERP system, calculates premium distributions and claims settlement across multiple reinsurers, and pushes journal entries back into the ERP system. Millions in transactions, every year.

Very few employees understand this system from a business perspective. The domain is so complex that changes to the system were only possible at significant risk.

No documentation, no automated tests. Everything lives in people's heads. At the same time, the application needed to be extended, and Excel, Visual Basic, and MS Access is not a stack you want to run critical financial processes on permanently.

Our mandate: migrate the tool to a modern stack. Without losing the domain knowledge.

How an Accessible Domain Is Created

Phase 1: Interrogating the Code

Before we talked to people, we talked to the code. We had an AI analyze the complete VBA source code, with the task of explaining the system's business logic and formulating open questions where the code alone was insufficient.

From multiple analysis sessions, we developed both an initial understanding of the system's business logic and a consolidated question catalog. Each session examined the code from a different perspective and produced different results. During synthesis, redundancies were cleaned up and the questions were structured by business area.
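What such an analysis session can look like in practice, as a minimal sketch: llm_complete is a placeholder for whatever model client you use, the perspectives and prompt wording are illustrative, and the encoding assumption depends on how the VBA modules were exported.

```python
from pathlib import Path

# Sketch of the Phase 1 analysis loop. llm_complete is a placeholder,
# not a real API; the perspectives and prompt wording are illustrative.
ANALYSIS_PROMPT = """You are analyzing the VBA source of a legacy reinsurance tool.
Perspective for this session: {perspective}.
1. Explain the business logic you can infer from the code.
2. List open questions where the code alone is not sufficient."""

PERSPECTIVES = [
    "data flow from the ERP system and back",
    "premium distribution across reinsurers",
    "claims settlement",
    "error handling and edge cases",
]

def run_analysis_sessions(src_dir: str) -> list[str]:
    # Exported VBA modules; the encoding depends on how they were exported.
    source = "\n\n".join(module.read_text(encoding="latin-1")
                         for module in sorted(Path(src_dir).rglob("*.bas")))
    return [llm_complete(system=ANALYSIS_PROMPT.format(perspective=perspective),
                         user=source)
            for perspective in PERSPECTIVES]
```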

Phase 2: Interviews with Domain Experts

We went into the interviews armed with the question catalog: over eight hours of conversation, spread across multiple sessions. We recorded the conversations and moderated them with the goal of processing the transcripts with AI afterwards. This produced over 60,000 words of raw material.

This goal determined how we conducted the interviews. We call it "AI-friendly interviewing".

Setting chapter markers. We explicitly announced when we changed topics: "We're now talking about Process 2, contract management." This allowed the AI to cleanly segment the conversation and assign statements to the correct business areas.

Speaking for the AI. When the domain expert said something important, we repeated it in our own words. We summarized intermediate results and reformulated statements in a more structured way. The redundancy made the subsequent synthesis more robust.

Using the question catalog flexibly. We first asked the domain experts to speak freely. As they talked, we recognized which questions from the catalog fit and interjected them. This kept the interview a natural conversation. But one that systematically filled in the open questions.
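The chapter markers pay off at synthesis time: segmenting a transcript then needs nothing more than a text split before the material goes to the AI. A sketch, assuming the marker phrase follows the convention announced in the interview:

```python
import re

# Sketch: split a raw transcript at announced chapter markers such as
# "We're now talking about Process 2, contract management."
MARKER = re.compile(r"We're now talking about (?P<topic>[^.]+)\.")

def segment(transcript: str) -> dict[str, str]:
    segments: dict[str, str] = {}
    topic, buffer = "preamble", []
    for line in transcript.splitlines():
        match = MARKER.search(line)
        if match:
            segments[topic] = "\n".join(buffer)
            topic, buffer = match.group("topic"), []
        else:
            buffer.append(line)
    segments[topic] = "\n".join(buffer)
    return segments  # one chunk per business area, ready for synthesis
```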

Phase 3: Synthesis into the Accessible Domain

From code analysis and interviews, the AI synthesized business descriptions, technical specifications, and process diagrams, supplemented by a glossary.

Unlike the Knowledge Engineering of the 1980s, knowledge bases are no longer coded by hand. Instead, the AI synthesizes structured domain knowledge from natural language sources. The result is not traditional documentation that becomes outdated after the project ends. It is a knowledge layer that can be consumed by both the team and AI agents. This is what we call the "accessible domain": domain knowledge in a form that is human-readable and machine-processable.
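As a rough illustration of what such a knowledge layer can look like on disk (the layout and file names here are hypothetical, not our client's actual structure):

```
accessible-domain/
├── glossary.md
├── processes/
│   ├── 01-contract-management.md
│   ├── 02-premium-distribution.md
│   └── 03-claims-settlement.md
├── diagrams/            # process diagrams per business process
├── worked-examples/     # concrete calculations with realistic numbers
└── sources/
    ├── legacy-vba/      # exported legacy code, read-only
    └── transcripts/     # raw interview material
```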

The Domain Knowledge Agent

An agent sits on top of the accessible domain. 80 lines of Markdown. It has access to the complete process documentation, the legacy code, and the interview transcripts. Its tools are read-only: it can search and read, but not modify anything. This is a deliberate constraint. The agent is an information system, not an actor.

The central design decision: one question, one answer. No scope creep, no speculation. The more focused the task, the more reliable the answer.

It searches sources in priority order: first the validated process documentation, then the legacy code, then the raw transcripts. When it is uncertain, it says so explicitly. When sources contradict each other, it names the contradiction.
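Abridged, an instruction file of this kind can look like the following sketch. The wording and paths are ours and purely illustrative, not the original 80 lines:

```markdown
# Domain Knowledge Agent

Answer exactly one business question per request.
No scope creep, no speculation.

## Sources, in priority order
1. accessible-domain/processes/  — validated process documentation
2. sources/legacy-vba/           — the legacy code
3. sources/transcripts/          — raw interview material

## Rules
- All tools are read-only: search and read, never modify.
- If you are uncertain, say so explicitly.
- If sources contradict each other, name the contradiction and cite both sides.
```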

We use the agent throughout the development process: in specification, story creation, test planning, and implementation. For complex questions, it is queried multiple times on different sub-aspects. The answers are then consolidated.
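For a complex question, the fan-out can be as simple as this sketch; ask_agent is a placeholder for however the agent is invoked in your setup:

```python
# Sketch: query the agent once per sub-aspect, then consolidate.
# ask_agent is a placeholder, not a real API.

def answer_complex(question: str, sub_aspects: list[str]) -> str:
    partials = [ask_agent(f"{question} Focus only on this aspect: {aspect}")
                for aspect in sub_aspects]
    return ask_agent(
        "Consolidate the following partial answers into one answer. "
        "Name any contradictions between them:\n\n" + "\n---\n".join(partials))
```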

What We Learned

AI Finds What Humans Overlook

During the code analysis, the AI discovered a field mapping error that had been in the system for years. The domain expert was unaware of it. The bug had no impact on the settlement amounts but corrupted stored traceability data.

Contradictions as a Feature

We explicitly instructed the AI during synthesis to look for contradictions between interview statements and code. At several points, the domain expert described a different logic than what was implemented in the code. Occasionally, statements from different interviews on the same topic also contradicted each other. We documented these contradictions and clarified them in follow-up interviews. Some were genuine errors. Others were deliberate deviations with historical reasons. In every case, the knowledge became more precise.

Humans could never have absorbed this volume of raw material and systematically checked it for contradictions at this level of detail.

AI Also Gets Things Wrong

The AI got one central piece of calculation logic wrong. The domain expert recognized it, gave feedback, and we incorporated it. Overall, we estimate that about 80 percent of the business relationships were captured correctly on the first pass. The remaining 20 percent had to be corrected in validation loops: the AI delivers a draft, the expert provides feedback, and the documentation becomes more precise.

Plan for Review Capacity

AI produces a lot of content very quickly: 56 process diagrams, 38 files, hundreds of pages of specifications. Human reviewers have limited absorption capacity. You need to plan deliberately for the fact that domain experts cannot review everything at once, and divide the reviews into manageable portions.

This is not an AI-specific problem. Artifacts created by humans are also skimmed rather than reviewed. What matters is the format: worked examples with concrete numbers force careful examination, while process diagrams are easy to rubber-stamp.

Worked Examples as a Validation Tool

One thing we should have done earlier: worked examples. At unclear points, we calculated concrete example cases and walked through them with the domain expert.

This was a breakthrough. A worked example with concrete numbers makes misunderstandings immediately visible: Is this amount correct? Is a step missing here? Why is the result negative?

In the future, we would work with examples from the start, possibly using Example Mapping as a structured method. Not just as a supplement to the documentation, but also as a basis for test-driven development.
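A worked example translates almost directly into a test. A sketch with entirely invented numbers; distribute_premium is a placeholder for the migrated calculation, not the client's actual function:

```python
from decimal import Decimal

# Sketch: an expert-reviewed worked example becomes a regression test.
# All figures are invented; distribute_premium is a placeholder for
# the migrated domain calculation.
def test_premium_distribution_worked_example():
    shares = distribute_premium(
        gross_premium=Decimal("100000.00"),
        reinsurer_quotas={"Re A": Decimal("0.50"),
                          "Re B": Decimal("0.30"),
                          "Re C": Decimal("0.20")},
    )
    # In practice, the expected amounts are exactly the numbers the
    # domain expert confirmed when reviewing the worked example.
    assert shares == {"Re A": Decimal("50000.00"),
                      "Re B": Decimal("30000.00"),
                      "Re C": Decimal("20000.00")}
```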

How It Feels in the Project

Developers as Knowledge Engineers. In our migration project, there were no product owners and no business analysts. The developers took on the role of knowledge extractors, supported by the AI: it structures the knowledge, finds contradictions, and surfaces relationships. What stays with humans: evaluating results, prioritizing questions, making decisions. In a different context, such as a greenfield project with multiple stakeholders, AI would complement a BA or PO role, not replace it.

Onboarding: The Barrier Drops. One colleague was only in the project one day per week. Normally, this would be difficult with domain complexity at this level. With the domain knowledge agent, he could ask business questions at any time and orient himself in the domain. He stayed productive without having to pull a colleague out of their own work for every question.

A colleague at the insurance company with several years of industry experience rated our domain understanding as unusually deep for the short project duration. This is because the knowledge is available in a form that enables systematic study. The entry barrier to the domain drops. But it does not disappear.

The Domain Expert Does Not Feel Threatened. He was impressed, not concerned. His knowledge is being documented and made accessible for the first time. He is no longer the sole bottleneck. This is a relief for him too. During the interim presentation, we joked about naming the agent after him.

Outlook: What We Don't Know Yet

We are describing an approach that worked in a specific project. It is important to name the conditions that favored this success. And the open questions that arise from it.

Why this project was a favorable case. Few domain experts with consistent knowledge. A clearly bounded legacy system. No conflicting stakeholder interests. When multiple domain experts hold contradictory views, or when processes need to be defined rather than merely migrated, solutions for those situations still need to be found.

The next level: The domain expert without a mediator. Currently, we as developers are the interface between domain expert and AI. This works but scales only to a limited extent. A conceivable next level would be an interface where the domain expert interacts with the AI independently and the documentation grows iteratively. Whether this works without technical guidance is an open question.

Scaling to larger systems. The system being replaced was manageable. The domain knowledge agent works because the model can navigate the content in a targeted way. For larger codebases or multiple business domains, the question arises whether a single agent is still sufficient or whether specialized agents per business area are needed. The knowledge base itself also grows with every interview and every correction loop. At what point does the context become too large to remain precise?

Four Recommendations

For those who want to take a similar path, here are four things we took away from our project:

  1. Involve the real experts, not proxies. Above all, talk frequently and regularly with those who use the system daily. The implicit knowledge lives in the minds of the users, not in existing documentation. And plan enough time for interviews.
  2. Build the accessible domain from day 1. Structured by processes, with a glossary and worked examples. This is not documentation overhead but the central artifact of your project. The foundation on which humans and AI work together. Start with the worked examples, not the diagrams.
  3. Lower the barrier to domain knowledge. The accessible domain makes domain knowledge accessible to all project participants. The speed gain comes when business questions no longer bottleneck on the domain expert but go through the domain knowledge agent. This does not replace roles, but it relieves the people behind them.
  4. Keep the responsibility with humans. AI lowers the barrier to domain knowledge. It replaces neither the domain expert nor the critical eye of the team. Use it as a drafting engine and information system. But review, correct, and decide yourself.
