AI Policy

Last Updated: February 17, 2026
Effective Date: Upon acceptance of the Master Service Agreement


1. Purpose

This AI Policy provides transparency about how artificial intelligence operates within Tacitus Systems infrastructure. It documents our commitments regarding model training, data usage, output ownership, and the boundaries of AI-assisted decision-making.

For definitions of capitalized terms, refer to the Master Service Agreement, Section 2.


2. The Four Pillars

Tacitus Systems operates under four non-negotiable principles:

2.1 No Training

We do not use Customer Data, Customer Content, Input, Output, or Vector Embeddings to train, fine-tune, distill, or otherwise improve AI models. This prohibition is absolute and extends to all sub-processors engaged by Tacitus Systems.

Models deployed within Tacitus Systems infrastructure operate in inference-only mode. Customer data is processed at query time and is not retained by the model or incorporated into model weights.

2.2 No Access

In Cortex Mode, Tacitus Systems physically cannot access Customer Data: the appliance operates without external network connectivity, and no outbound connections to Tacitus Systems, model providers, or any third party are possible.

In Cloud Bridge Mode, Customer Data resides in dedicated, encrypted volumes. Tacitus Systems does not access these volumes except when explicitly authorized by Customer for support purposes or when required by applicable law.

2.3 No Cloud Dependency

Cortex operates independently of any internet connection. All AI inference, document processing, OCR extraction, and vector search occur locally on the Customer’s hardware. No external API calls are made during any stage of the processing pipeline.

Software updates are delivered via the Supply Drop protocol, which preserves the air-gap. Customer controls when and whether to apply updates.

2.4 No Compromise

Security is not a feature; it is the architecture:

  • Encryption keys are hardware-sealed in TPM 2.0 modules, bound to the physical motherboard.
  • Storage is RAID 1 mirrored across NVMe drives for fault tolerance.
  • Unprocessed documents exist only in volatile memory (tmpfs) and vanish on power loss.
  • Each instance generates mathematically unique document identifiers, preventing cross-tenant correlation.
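The last point can be illustrated with a minimal sketch. This assumes identifiers are random UUIDs drawn from each instance's local entropy source; the actual identifier scheme is internal to Tacitus Systems.

```python
import uuid

def new_document_id() -> str:
    """Generate a document identifier from this instance's local RNG.

    Because each appliance draws from its own entropy source, identifiers
    produced on different instances share no common sequence or namespace,
    so they cannot be correlated across tenants.
    """
    return str(uuid.uuid4())
```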

3. Model Provenance

3.1 Model Sources

Tacitus Systems deploys open-source and open-weight large language models for inference. Models are selected based on performance benchmarks, context window capacity, and VRAM efficiency for the target hardware tier.

Model categories by service tier:

  Tier         Model Class                    Parameter Range      Context Capacity
  Solo         Mid-range instruction-tuned    10-20B parameters    Standard
  Cortex       High-context specialist        10-20B parameters    Extended (up to 1M tokens)
  Quad / Pro   Frontier-class                 70-120B parameters   Extended

Specific model identifiers are documented in the deployment manifest provided with each Supply Drop.

3.2 Embedding Model

Document embeddings are generated by an open-source text embedding model that produces fixed-dimension vectors, served via an API endpoint that operates locally within the Customer’s infrastructure.

3.3 OCR Model

Optical character recognition is performed by an open-source structured OCR engine producing dual-payload output: markdown text and bounding box coordinates (JSON). This enables precise source highlighting in the user interface without reprocessing.
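As an illustration only (the exact schema is defined by the deployed OCR engine and may differ), a dual-payload page result and its use for source highlighting might look like:

```python
# Hypothetical shape of one page's OCR result: extracted markdown text plus
# bounding boxes (page pixel coordinates) for each text block.
page_result = {
    "page": 1,
    "markdown": "# Invoice 2026-014\n\nTotal due: 1,200.00 EUR",
    "boxes": [
        {"text": "Invoice 2026-014", "bbox": [72, 54, 523, 92]},
        {"text": "Total due: 1,200.00 EUR", "bbox": [72, 410, 360, 438]},
    ],
}

def highlight_box(result: dict, needle: str):
    """Return the bounding box of the first block containing `needle`,
    so the UI can highlight the source without re-running OCR."""
    for block in result["boxes"]:
        if needle in block["text"]:
            return block["bbox"]
    return None
```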


4. Inference Architecture

4.1 Local Inference

All AI inference runs locally:

  • Cortex: On the unit’s dedicated GPU hardware. No data leaves the appliance.
  • Cloud Bridge: On dedicated or shared GPU resources within the Customer’s single-tenant isolated instance. No data is sent to external LLM providers (OpenAI, Anthropic, Google, or any other third party).

4.2 No External API Calls

The inference engine is self-contained. At no point during the processing pipeline — ingestion, embedding, retrieval, generation, or response delivery — does the system transmit Customer Data, Input, or Output to any external endpoint.

4.3 Resource Management

The inference engine implements a “Chat is King” priority system: when a user submits a query, background document ingestion is throttled to ensure responsive inference. This is managed via a short-lived semaphore that pauses new OCR page processing until the active inference request completes.


5. Output Ownership

5.1 Customer Ownership

Customer retains 100% ownership of:

  • Input: All queries and prompts submitted to the system.
  • Output: All AI-generated responses, summaries, citations, and analyses.
  • Vector Embeddings: All mathematical representations derived from Customer Data.

Tacitus Systems claims no intellectual property interest in Output or Vector Embeddings. No license to these materials is granted to Tacitus Systems beyond what is strictly necessary to deliver the service.

5.2 No Derivative Claims

Tacitus Systems does not claim ownership of, or any license to, insights, conclusions, or work product that Customer derives from Output. The Customer’s use of Output is unrestricted.


6. Automated Decision-Making

6.1 Human-in-the-Loop

The Tacitus Systems AI assists human decision-making. It does not make autonomous decisions that produce legal effects or similarly significant effects on individuals.

All AI-generated Output is presented as a recommendation or analysis for review by a qualified professional. The system is designed as a tool for professionals, not a replacement for professional judgment.

The system’s Output does not constitute legal advice, medical diagnosis, engineering certification, or any other form of professional opinion. Customers are responsible for implementing appropriate human review processes before acting on AI-generated content.

6.2 GDPR Article 22 Compliance

Tacitus Systems does not engage in automated individual decision-making as defined by GDPR Article 22. The infrastructure is designed to augment human analysis, not to replace it.


7. Model Updates & Governance

7.1 Supply Drop Delivery

Model updates are delivered as part of the Supply Drop protocol:

  • All updates are cryptographically signed.
  • Customer verifies the signature before application.
  • Customer controls the timing and decision to apply updates.
  • No model update occurs without Customer’s affirmative action.
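The verify-then-confirm gating above can be sketched as follows. This is a simplified illustration only: an HMAC-SHA256 digest stands in for the actual signature scheme, which is asymmetric, and the function name is hypothetical.

```python
import hmac
import hashlib

def apply_supply_drop(payload: bytes, signature: bytes, key: bytes,
                      customer_confirmed: bool) -> bool:
    """Apply an update only if (a) the signature verifies and (b) the
    customer has taken affirmative action."""
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # tampered or mis-signed drop: never applied
    if not customer_confirmed:
        return False  # valid drop, but customer controls the timing
    return True  # proceed with installation
```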

7.2 Model Change Notification

When a Supply Drop includes a change to the primary inference model (e.g., a new model version or a different model), the Supply Drop release notes will clearly document: (a) the nature of the change, (b) the expected impact on output quality, and (c) any changes to resource requirements.

7.3 Rollback

If Customer experiences degraded performance after a model update, Tacitus Systems will provide a rollback Supply Drop upon request.


8. Hallucination Acknowledgment

Large Language Models can produce outputs that are factually incorrect, internally inconsistent, or entirely fabricated. This is a known limitation of the technology, not a defect in the Tacitus Systems infrastructure.

Tacitus Systems mitigates hallucination risk through Retrieval-Augmented Generation (RAG): the system retrieves relevant passages from Customer Data and presents them alongside the AI’s response, with source citations and page references. This allows the professional user to verify the AI’s claims against the original documents.

Nonetheless, RAG does not eliminate hallucination risk. Customer acknowledges this limitation and agrees to implement appropriate verification procedures before relying on AI-generated Output. Tacitus Systems bears no liability for actions taken based on unverified AI output.
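The citation pattern described above can be sketched as follows. This is a toy illustration, not the production retrieval pipeline; the corpus shape and function names are assumptions.

```python
from math import sqrt

def cosine(a, b) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_with_citations(query_vec, corpus, k=2):
    """Rank stored passages by similarity to the query embedding, keeping
    the source document and page with each hit so the user can verify the
    model's answer against the original text."""
    ranked = sorted(corpus, key=lambda p: cosine(query_vec, p["vec"]),
                    reverse=True)
    return [{"text": p["text"], "source": p["doc"], "page": p["page"]}
            for p in ranked[:k]]
```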


9. Contact

For AI policy inquiries or model governance questions:

Tacitus Systems
Ul. Krótka 7
97-200 Tomaszów Mazowiecki
Poland
Email: contact@tacitussystems.com