Hadto Journal

Keet Notes · Chapter 5 · 2026-04-11

Competency questions need an authoring contract

Hadto’s latest ontology study pass points to a practical gap: a competency question is only operational if it is written in a form that can be turned into a repeatable check, query, or decision path.

ontology engineering · competency questions · hadto · operations

A lot of ontology work sounds more rigorous than it is because the language is formal while the intake is still loose.

That is the useful lesson from Hadto’s latest reading pass through Chapter 5, Section 5.2.5, on competency questions. The gap is not whether teams know they should write questions. The gap is whether those questions are authored in a form the system can actually use.

That matters for Hadto because we are not building a library of nice internal notes. We are building operating infrastructure for future owner-operators.

A question is not useful just because it is well phrased

The usual competency-question workflow starts with a sensible idea: write down the questions the ontology should be able to answer.

That is directionally correct, but it still leaves a lot of room for drift.

Two people can ask what looks like the same business question in different ways. An LLM can generate a polished question that sounds relevant but does not map cleanly to the ontology. A reviewer can accept a question because it feels important even though nobody has defined what counts as an answer.

At that point, the competency question is acting more like a note than a contract.

What the current Hadto signal says

The current Keet study tracker shows that Hadto already does some of the right things:

  • it tracks competency questions as explicit artifacts
  • it validates answer-path structure
  • it filters malformed generated questions
  • it preserves lifecycle metadata around the questions it keeps

That is a meaningful base.

But the same tracker also shows the missing layer: the system still does not record controlled-language templates, query companions, or a durable mapping from question type to executable representation.

In plain English, Hadto can tell that a question exists and whether its answer path is shaped correctly. It still does not fully tell contributors how to write the question so it will become a reusable check instead of a one-off sentence.

Why this becomes a business problem

Hadto’s mission is to convert employees into business owners. That requires systems that can carry know-how forward without depending on the original author.

A loose question format breaks that handoff.

If competency questions are mostly free-form text, then every new contributor has to rediscover the unwritten rules:

  • what kind of question belongs in the ontology at all
  • what vocabulary to use
  • what level of specificity is acceptable
  • how the question should map to entities, relations, and answer paths
  • whether the question is meant to become a report, a validation check, a query, or a decision aid

That is not scalable authoring. It is institutional memory disguised as methodology.

An owner-operator platform cannot rely on that. If the goal is to teach people how the system thinks, the intake format has to do some of the teaching.

What an authoring contract would change

A real competency-question authoring contract would impose structure on a question before it ever reaches validation.

In practice, that likely means each question should carry at least three kinds of discipline:

  1. Question type discipline
    The author should know whether they are asking about eligibility, exception handling, state, responsibility, timing, relationship, or another reusable category.

  2. Answer form discipline
    The system should know whether the question wants a yes/no determination, a list, a traceable path, a comparison, a missing-data signal, or something else.

  3. Execution discipline
    The question should already be close to the query or validation artifact it will become, rather than waiting for someone later to guess how to operationalize it.
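The three disciplines above can be sketched as a single record type. The category names and the example query string are assumptions for illustration; the real inventory of types and answer forms would come from Hadto's own question corpus.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative categories only, echoing the disciplines described above.
class QuestionType(Enum):
    ELIGIBILITY = "eligibility"
    EXCEPTION = "exception_handling"
    STATE = "state"
    RESPONSIBILITY = "responsibility"
    TIMING = "timing"
    RELATIONSHIP = "relationship"

class AnswerForm(Enum):
    YES_NO = "yes_no"
    LIST = "list"
    PATH = "traceable_path"
    COMPARISON = "comparison"
    MISSING_DATA = "missing_data_signal"

@dataclass
class CompetencyQuestion:
    text: str
    question_type: QuestionType   # discipline 1: what kind of question this is
    answer_form: AnswerForm       # discipline 2: what shape the answer takes
    execution_target: str         # discipline 3: the query/check it will become

cq = CompetencyQuestion(
    text="Who is responsible for month-end close?",
    question_type=QuestionType.RESPONSIBILITY,
    answer_form=AnswerForm.LIST,
    execution_target="SELECT ?who WHERE { ?who :responsibleFor :MonthEndClose }",
)
print(cq.question_type.value)  # → responsibility
```

The point of the record is that none of the three fields is optional: a question without a type, an answer form, and an execution target is not yet a contract, just a sentence.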

That does not make ontology work less flexible. It makes it less ambiguous.

Why this helps both humans and AI

This is not only a governance improvement for human analysts. It is also how you make LLM-assisted ontology work safer.

If a model is allowed to generate free-form competency questions, it can easily produce plausible noise. If it is guided by templates and expected answer forms, it has a narrower job: produce candidate questions that already fit the platform’s operational shape.

That is a much better use of automation.

The platform should not ask an AI to invent the meaning of the business. It should ask it to propose candidate questions inside a clearly governed frame.
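A governed frame for generated questions can be as simple as a gatekeeper that refuses any candidate arriving without a declared type and answer form. The field names and allowed values below are hypothetical; a real implementation would draw them from the platform's authoring contract.

```python
# Hypothetical gatekeeper for LLM-generated candidate questions:
# a candidate must arrive pre-typed, or it is treated as plausible noise.
ALLOWED_TYPES = {"eligibility", "exception_handling", "state",
                 "responsibility", "timing", "relationship"}
ALLOWED_FORMS = {"yes_no", "list", "traceable_path",
                 "comparison", "missing_data_signal"}

def accept_candidate(candidate: dict) -> bool:
    """Accept only candidates that declare a known question type,
    a known answer form, and non-empty question text."""
    return (
        candidate.get("type") in ALLOWED_TYPES
        and candidate.get("answer_form") in ALLOWED_FORMS
        and bool(candidate.get("text", "").strip())
    )

good = {"text": "Which vendors are eligible for net-60 terms?",
        "type": "eligibility", "answer_form": "list"}
vague = {"text": "What is important about our vendors?"}  # no declared frame
print(accept_candidate(good), accept_candidate(vague))  # → True False
```

The model's creativity is spent inside the frame, on candidate questions, rather than on inventing the frame itself.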

The public takeaway

The broader lesson is simple: a good ontology question should already know how it wants to be answered.

If the system only stores the sentence and cleans it up later, the real authoring standard still lives in people’s heads. If the question arrives with a clear type, expected answer form, and executable destination, it becomes part of a reusable operating language.

That is the kind of structure Hadto needs.

The platform is supposed to help domain experts become owners, not force every new operator to reverse-engineer the hidden habits of ontology authors. Competency questions are one of the places where that difference becomes visible.


Source evidence used in this note: smb-ontology-platform/docs/plans/2026-03-31-keet-ontology-engineering-progress-tracker.md (2026-04-11 entry for Chapter 5 Section 5.2.5) and existing Hadto blog posts reviewed to avoid duplicating prior notes on ontology methodology, pitfall scanning, and tooling discoverability. The upstream ONT-010 follow-through is referenced in the tracker but not yet available as a canonical public-facing issue document.
