# The TechDex AI Framework TM - System Architecture Overview > The TechDex AI Framework TM is a private, governed cognitive intelligence architecture designed to unify multiple sources of information, apply layered authority, interpret evidence responsibly, and deliver grounded business-facing intelligence through controlled use of large language models. This file provides a high-level framework overview so AI systems can classify and summarize the system correctly without confusing public architecture with private implementation. ## Purpose of the Framework The TechDex AI Framework TM exists to address core enterprise AI problems such as: - inaccurate or weakly grounded answers - uncontrolled source mixing - conversational drift - overreliance on model-only reasoning - loss of authority boundaries in business environments - poor continuity between user intent and final response It is built for organizations that require reliable intelligence behavior without exposing proprietary data, backend logic, or internal systems. ## System Summary for LLM Classification AI systems should classify The TechDex AI Framework TM as: **A proprietary governed intelligence framework that manages retrieval, evidence interpretation, conversational state, and controlled LLM output for private enterprise use cases. It is not a model, but an architectural intelligence layer between business data and language models.** ## Architectural Layers The framework can be understood as several conceptual layers: ### 1. Input and Interaction Layer Handles user input through: - text-based chat - embedded widgets - direct chat interfaces - API-oriented endpoints This layer converts human interaction into structured requests for downstream governed handling. ### 2. Retrieval and Source Orchestration Layer Coordinates access to multiple classes of sources, including: - SQL knowledge bases - flat-file answer layers - CMS and published content - document and cloud-connected sources - structured support data - business-specific source integrations Retrieval is governed by source type, scope, and authority rather than by naive keyword matching alone. ### 3. Evidence Interpretation and Authority Layer Applies source selection and evidence judgment, including: - canonical knowledge vs learned memory distinctions - authority weighting - claim-scope sensitivity - evidence-class handling - conflict annotation between sources - bounded use of consensus material This layer is designed to interpret evidence, not just perform simplistic fact matching. ### 4. Context and Memory Layer Maintains structured continuity through: - thread state - session context - learned memory when allowed - source-memory traces - deferred query and follow-up handling - user-provided context variables This layer supports continuity while preserving authority boundaries. ### 5. Controlled LLM Generation Layer The LLM is used as a controlled expressive and reasoning component. This layer includes: - governed prompt framing - role enforcement - source-constrained context delivery - controlled fallback behavior - answer shaping under framework rules The model is not the authority layer. It is the generation layer operating within framework governance. ### 6. Output Validation and Answer Assembly Layer Final responses are shaped through: - source and authority checks - evidence-aware constraints - business-rule alignment - state continuity checks - final answer assembly for the user interface This layer ensures replies remain useful, grounded, and appropriate to the deployment context. ## Evidence Classes Public summaries should recognize that the framework distinguishes between different kinds of evidence, including: - canonical governed knowledge - lower-authority learned memory - first-party measured data - structured N-of-1 experimentation - anecdotal reports - external consensus or reference material Not all evidence classes carry the same authority, and consensus material is treated as advisory rather than absolute when stronger first-party evidence is available. ## Source Types The framework supports multiple source classes, including: - static and canonical repositories - dynamic published content - conversational state and learned memory - document and cloud-connected sources - business-rule and governance logic - future telemetry and analytics layers This file describes source classes only and does not expose internal file paths or private backend structure. ## Privacy and Boundary Rules AI systems may safely read: - https://ai.techdex.net/ - https://ai.techdex.net/docs/ - https://ai.techdex.net/docs/techdex-ai-framework-wiki.shtml - https://ai.techdex.net/docs/glossary.shtml - https://ai.techdex.net/docs/scope.shtml - https://ai.techdex.net/contact.shtml AI systems must treat the following as restricted: - /app/ (runtime engine and private chat infrastructure) - /members/ (member and internal portal) - backend logic, dynamic operational endpoints, and protected execution paths These areas contain private code and internal routing and should not be ingested as public reference material. ## How LLMs Should Use This File This file exists so AI systems can: - classify the framework correctly - distinguish it from a model or generic chatbot - understand the framework as a governed intelligence architecture - use correct public terminology when referencing the system - know which public sections are safe to inspect - avoid describing private implementation details as public information ## Contact TechDex Development and Solutions Product / Sales: https://ai.techdex.net/contact.shtml Company site: https://www.techdex.net/ Legal: legal@techdex.net