AI Systems
The AI Layer
What AI actually does in AiBS, what it does not do, and how prompts, usage, and feedback are tracked inside the product.
By Colby Reichenbach
Overview
AI in AiBS is scoped as explanation and editorial support, not a free-form baseball oracle.
The public AI layer is intentionally bounded. On selected team, umpire, and game surfaces, a user can ask for a chart brief or baseball explanation and get a response grounded in AiBS data instead of a generic chat answer.
Behind that, the app also has a larger internal AI layer: prompt registry support, feedback capture, usage and token tracking, daily editorial generation, game-report generation, and admin analytics around AI behavior.
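A prompt registry of this kind can be pictured as a versioned lookup: each prompt gets a stable key and a version number so downstream usage logs stay comparable when wording changes. The sketch below is illustrative only; the names and fields are assumptions, not AiBS's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptEntry:
    key: str       # stable identifier, e.g. "chart_brief" (hypothetical)
    version: int   # bumped when wording changes, so logged usage stays comparable
    template: str  # prompt text with named placeholders

REGISTRY: dict[str, PromptEntry] = {}

def register(entry: PromptEntry) -> None:
    REGISTRY[entry.key] = entry

def render(key: str, **kwargs: str) -> str:
    # Fill the registered template; callers never hand-build prompt strings.
    return REGISTRY[key].template.format(**kwargs)

register(PromptEntry(
    key="chart_brief",
    version=2,
    template="Explain the {chart} chart for {team} using only supplied AiBS data.",
))
```

Routing every AI call through a registry like this is what makes later evaluation possible: a usage record can cite `("chart_brief", 2)` instead of an anonymous string.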
That is why the AI story here matters. The system is not just a text box pasted onto a dashboard. It is an operational layer with scope, feedback, and cost/performance visibility.
Public scope
The public AI experience is intentionally bounded.
AiBS keeps its broader AI interfaces gated. The public layer stays attached to specific analytics and game-context surfaces because that is where an answer can remain inspectable against the underlying data. Copilot and Query Lab remain gated because they would require a higher support, safety, and product bar.
That boundary is part of the product discipline, not a sign that the AI layer is shallow. It is the opposite: the system is being scoped to the part that is most defensible in public use.
Operations
Usage and feedback are first-class signals.
The codebase now tracks internal AI usage across multiple workflows, not just direct chat. Article generation, report generation, and classifier paths all write usage signals so that model choice, token cost, and performance can be evaluated later rather than guessed at.
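The shape of such a usage signal can be sketched minimally: each workflow appends one event with its model, token counts, and latency, and cost questions become simple aggregations. Everything here (field names, the in-memory log, the rate table) is a hypothetical stand-in for whatever store AiBS actually uses.

```python
import time
from dataclasses import dataclass, asdict

@dataclass
class AiUsageEvent:
    workflow: str          # e.g. "article_generation" (illustrative name)
    model: str             # which model served the call
    prompt_tokens: int
    completion_tokens: int
    latency_ms: float
    ts: float              # unix timestamp of the call

USAGE_LOG: list[dict] = []  # stand-in for a real usage table

def record_usage(workflow: str, model: str, prompt_tokens: int,
                 completion_tokens: int, latency_ms: float) -> None:
    USAGE_LOG.append(asdict(AiUsageEvent(
        workflow, model, prompt_tokens, completion_tokens, latency_ms, time.time()
    )))

def cost_by_model(rate_per_1k_tokens: dict[str, float]) -> dict[str, float]:
    # Aggregate token spend per model so cost is measured, not guessed.
    totals: dict[str, float] = {}
    for e in USAGE_LOG:
        tokens = e["prompt_tokens"] + e["completion_tokens"]
        totals[e["model"]] = totals.get(e["model"], 0.0) \
            + tokens / 1000 * rate_per_1k_tokens[e["model"]]
    return totals
```

Because every workflow writes through the same `record_usage` path, admin analytics can slice by workflow, model, or time window without per-feature instrumentation.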
Article-level feedback and analytics surfaces are part of that same idea. If AI is part of the product, it should be observable as a system, not treated as a black box.
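Article-level feedback can be captured with the same discipline: a small, append-only record tied to the article identifier, summarized on demand. This is a minimal sketch under assumed names, not the real AiBS feedback schema.

```python
from collections import Counter

FEEDBACK: list[dict] = []  # stand-in for a persisted feedback table

def record_feedback(article_id: str, rating: str, note: str = "") -> None:
    # Constrain ratings so aggregation stays trivial and comparable.
    if rating not in {"up", "down"}:
        raise ValueError(f"unknown rating: {rating}")
    FEEDBACK.append({"article_id": article_id, "rating": rating, "note": note})

def feedback_summary(article_id: str) -> Counter:
    # Per-article tally an analytics surface could render directly.
    return Counter(f["rating"] for f in FEEDBACK if f["article_id"] == article_id)
```

Joined with the usage log, records like these are what turn the AI layer into an observable system rather than a black box.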
