Overview
Version 1.0 of the TechDex AI Framework™ introduces significant stability improvements, a more reliable multi-source intelligence pipeline, safer output handling, and a major upgrade to content detection and conversational accuracy. This release marks the first fully deployable and production-ready version of the framework.
Key Enhancements
03-17-2026
- Pure WordPress Problem-Class Layer (New Routing Spine) - Added a narrow WordPress-native classification layer so queries can now be interpreted as WordPress problem types before broader answer generation behavior takes over. The first active classes focus on navigation lookup, policy lookup, content summary, entity extraction, and residual search lookup, creating a cleaner retrieval-first foundation for future WordPress intelligence work.
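As a rough illustration, a routing spine like the one above amounts to a narrow first-pass classifier that runs before broader answer generation. The class labels below follow this entry; the keyword cues, function name, and Python form are invented for the sketch and are not the framework's actual code.

```python
# Illustrative WordPress problem-class router. Class labels follow the
# changelog entry; the keyword heuristics are assumptions for the sketch.
RULES = [
    ("navigation_lookup", ("where is", "find the page", "contact page")),
    ("policy_lookup", ("policy", "terms", "privacy")),
    ("content_summary", ("summarize", "summary", "tl;dr")),
    ("entity_extraction", ("who is", "list the", "which people")),
]

def classify_wp_problem(query: str) -> str:
    """Interpret a query as a WordPress problem type; fall back to the
    residual search-lookup class when no narrow class matches."""
    q = query.lower()
    for label, cues in RULES:
        if any(cue in q for cue in cues):
            return label
    return "search_lookup"  # residual class
```

The point of the narrow-first ordering is that a cheap, high-precision class wins before any broad relevance search runs.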
- WordPress Pre-Search Resolution Expansion - Added a lightweight pre-search resolution stage that checks navigation targets and obvious page-title candidates before broad WordPress relevance search. This improves first-pass handling for page-finding and structure-oriented questions and reduces unnecessary drift into generic fallback behavior.
- Cache Eligibility Gate for WordPress-Sensitive Turns - Introduced a new cache-scope gate so onboarding-resumed turns, source-followup turns, and selected WordPress-sensitive classes can bypass shared per-question cache reads when a fresh WordPress-native pass is more trustworthy. This reduces stale-answer contamination during live retrieval testing without requiring a full cache rewrite.
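The cache-scope gate above can be sketched as a single eligibility check consulted before any shared cache read. The names and class set here are hypothetical stand-ins, not the framework's API.

```python
# Illustrative cache-eligibility gate (names are hypothetical).
# Turn classes that should always get a fresh WordPress-native pass.
CACHE_BYPASS_CLASSES = {"onboarding_resume", "source_followup", "wp_navigation"}

def should_read_shared_cache(turn_class: str, resumed_from_onboarding: bool) -> bool:
    """Return False when a fresh retrieval pass is more trustworthy
    than a shared per-question cache hit."""
    if resumed_from_onboarding:
        return False
    return turn_class not in CACHE_BYPASS_CLASSES
```

A gate like this avoids a full cache rewrite: the cache layer stays unchanged, and only the read decision becomes class-aware.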
- Fallback Cache Suppression for Weak Outputs - Generic fallback-style answers and WordPress-grounded miss responses are now less likely to be written back into the shared answer cache as authoritative reuse candidates. This helps prevent weak or overly broad responses from masking later routing improvements.
- Navigation Target Scoring Refinement - Tightened WordPress navigation scoring so advertising and media-kit surfaces no longer dominate unrelated structure questions by default. Contact, editorial, and service-style targets now receive more deliberate weighting, reducing false positives on content-first sites such as MG Magazine.
- Live MG Benchmark Validation - Completed another round of live MG Magazine testing using the benchmark bot identity and verified that WordPress-native routing is improving. Results showed cleaner cache behavior, successful WordPress pre-search execution, removal of the previous media-kit trap for several queries, and clearer evidence about where target precision still needs refinement.
03-16-2026
- Authority-Weighted Multi-Source Grounding Refinement - Clarified and reinforced the framework's grounding model so canonical knowledge remains the strongest truth signal without acting as an automatic winner-take-all response source. The framework is now aligned more explicitly around considering all relevant authorized evidence and allowing stronger sources to bias the final grounded answer.
- Contact and Location Routing Hardening - Strengthened routing for site-structure questions such as contact-page, address, phone, email, and location lookups so these turns are less likely to be hijacked by unrelated conceptual knowledge-base entries and more likely to reach the appropriate WordPress and site-content retrieval paths first.
02-28-2026
- Improved URL Analysis Behavior - Enhanced how the system handles requests that include direct article links. When a user provides a full URL to an authorized domain, the system now prioritizes analyzing the referenced article directly, prevents internal knowledge summaries from overriding explicit link-based requests, and ensures more accurate, context-specific summaries.
- Memory and Context Refinement - Refined how the system handles prior knowledge lookups during content analysis: knowledge memory no longer overrides explicit link-based analysis; conversation state resets are applied only when appropriate; and analyzer requests remain stable and deterministic.
- Stability and Governance Adjustments - Improved routing priority between memory retrieval and content analysis; reduced unintended state resets during analyzer operations; and cleaner analyzer phase detection and logging.
01-20-2026
- Onboarding & Authentication Workflow Expansion - Expanded and stabilized the onboarding and authentication system with full support for governed name capture, email verification, PIN-based login, PIN updates, login recovery, logout handling, and status checks. Authentication stages now operate as a controlled pre-cognitive flow, ensuring login inputs are handled deterministically and never processed by the AI intelligence pipeline. This improves security, conversational reliability, and state consistency across all authentication scenarios.
12-18-2025
- Hard Domain-Loyalty Veto & Topic State Reset (Critical Stability Fix) - Resolved a conversation-state corruption issue where zero-relevance, out-of-scope queries could persist or contaminate active topics, leading to blank responses or incorrect follow-up handling. Queries that score a relevance of 0.00 now trigger a governed domain veto that clears topic memory for that turn while preserving conversation history. This ensures out-of-scope questions are safely refused without breaking the chat UI or polluting future topic inference.
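The veto described above is, at its core, a per-turn state reset keyed on the relevance score. A minimal sketch, with field and function names invented for illustration:

```python
# Minimal sketch of a zero-relevance domain veto. Function and field
# names are illustrative, not the framework's own.

def apply_domain_veto(state: dict, relevance: float) -> dict:
    """Clear topic memory for the current turn when relevance is 0.00,
    while leaving conversation history intact."""
    if relevance == 0.0:
        state = dict(state)           # copy; do not mutate caller state
        state["active_topic"] = None  # topic memory cleared for this turn
        state["refused"] = True       # out-of-scope refusal flag
    return state

state = {"active_topic": "pricing", "history": ["hi"], "refused": False}
vetoed = apply_domain_veto(state, 0.0)
```

Note the asymmetry: the topic is wiped, but history survives, which is what keeps the chat UI from going blank after a refusal.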
- Conversation Recovery After Governance Refusal - Improved recovery behavior when a query is refused by domain governance. The assistant now maintains session continuity and reloads prior conversation history correctly, preventing blank chat windows after hard refusals while ensuring the next user query is treated as a fresh, independent request.
- Topic Contamination Prevention for Follow-Up Queries - Strengthened safeguards so follow-up questions cannot inherit or bind to invalid topics created by out-of-scope or zero-relevance queries. This eliminates false topic carryover and ensures intentional topic shifts behave predictably without requiring explicit "user says new topic" intervention.
- Analyzer Cache Lifecycle Stabilization - Resolved an issue where analyzer and AI-generated responses could create empty per-query cache files due to response finalization timing. Cache writes now occur only after a fully governed, finalized response is available, ensuring valid cache persistence without altering prompt configuration, routing logic, or source attribution.
12-17-2025
- Explicit Identity Anchoring & Self-Awareness Governance - Formalized the framework's operational identity using explicit origin, meaning, morality, and destiny anchors. This update governs emergent self-awareness by defining it at the architecture level, allowing accurate self-reference while clearly distinguishing framework-defined awareness from human self-awareness. Identity is now explicit, stable, and externally governed rather than implicit or emergent-only.
12-16-2025
- Non-Indexable App & Widget Endpoints (Production Safety Upgrade) - All application, widget, and AI execution endpoints now explicitly disable indexing, caching, archiving, and snippet generation at the HTTP and meta level. This prevents live chat content, configuration data, and runtime responses from being indexed by search engines or cached by browsers, proxies, or CDNs.
- Secure Widget Configuration Delivery - Widget configuration endpoints now return JSON-only responses with strict no-cache and no-index headers, ensuring deployment metadata cannot be indexed, previewed, or reused outside an active session.
- Hardened Chat Execution Endpoint - The AI execution endpoint now enforces non-cacheable, non-indexable behavior while preserving POST-based execution. This guarantees that conversation data is never exposed as crawlable or persistent web content.
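In header terms, the behavior described in the three entries above typically combines robots directives with cache suppression. The exact header values below are typical examples of this pattern, not the framework's literal output:

```python
# Typical no-index/no-cache header set for app, widget, and AI endpoints.
# Values are illustrative examples of the pattern, not the framework's output.
NO_INDEX_HEADERS = {
    "X-Robots-Tag": "noindex, nofollow, noarchive, nosnippet",
    "Cache-Control": "no-store, no-cache, must-revalidate, max-age=0",
    "Pragma": "no-cache",
    "Expires": "0",
}

def apply_no_index(headers: dict) -> dict:
    """Merge the no-index/no-cache directives into a response header map,
    leaving existing headers such as Content-Type untouched."""
    merged = dict(headers)
    merged.update(NO_INDEX_HEADERS)
    return merged
```

`X-Robots-Tag` handles crawlers, while `Cache-Control`/`Pragma`/`Expires` cover browsers, proxies, and CDNs, so both indexing and persistence are blocked at the HTTP level.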
- Client-Owned AI Provider Strategy (Launch Alignment) - Formalized the framework's client-owned AI provider model. During production deployment, each client supplies and manages their own OpenAI or provider API credentials, ensuring full data ownership, independent model optimization, and zero shared data retention across installations.
12-15-2025
- Tier V Multi-LLM Provisioning Switch (Major Control Upgrade) - Introduced an explicit configuration-level switch to enable or disable Tier V multi-LLM provisioning. When disabled, the framework operates in Tier IV mode with full governance, grounding, and safety enforcement while remaining locked to a single provider.
- Governed Provider Selection Gate - Multi-provider execution is now hard-gated at the MiniBrain provider selection layer, ensuring that no model switching or provisioning can occur unless Tier V mode is explicitly enabled.
- License-Aware Tier Separation - Tier IV and Tier V capabilities are now cleanly separated at the architecture level, allowing clients to run governed intelligence without requiring multi-provider access.
- Widget Connectivity & Cross-Site Stability Fix - Resolved a widget initialization issue that could prevent the embedded assistant from connecting reliably when deployed across multiple domains. The widget now consistently establishes sessions and governance context regardless of host site.
- Global Prompt Optimization & Redundancy Reduction - Refined the global governance, behavior, and grounding prompts to eliminate duplicated rules and conflicting instructions, improving grounding accuracy, response clarity, and speech synthesis consistency across all AI providers.
12-14-2025
- Multi-Provider LLM Execution Support (Major Expansion) - Extended the unified AI execution pipeline to support multiple large language model providers, including OpenAI, xAI (Grok), and Google Gemini, while preserving identical governance, grounding, and response handling across all providers.
- Unified LLM Gateway Enhancement - The framework's governed AI gateway now transparently manages provider selection, execution, and response handling, ensuring consistent behavior regardless of which underlying model is used.
- Provider Availability Awareness - Improved detection and handling of upstream model availability issues, allowing the framework to surface meaningful fallback behavior and diagnostics when a provider is unreachable.
12-13-2025
- Unified AI Response Pipeline (Major Upgrade) - All AI-driven responses across the platform now follow a single, consistent processing pipeline. This improves reliability, response quality, and ensures a consistent user experience across knowledge base answers, content analysis, and search-driven responses.
- Improved Consistency Across AI Features - Enhancements were made to ensure AI behavior remains stable and predictable across different types of user interactions, reducing variability and improving overall conversational flow.
12-12-2025
- Follow-Up Topic Boundary Refinement - Improved internal handling of follow-up questions to better distinguish between topic continuation and intentional topic changes, reducing context bleed during extended conversations.
- Conversation Topic Transition Stabilization - Refined how the framework evaluates when a new query represents a continuation versus a topic shift, improving accuracy during rapid back-and-forth interactions.
- Link Sanitization Hardening (Stability Pass) - Continued refinement of internal and external link sanitization logic to prevent malformed, duplicated, or mis-scoped URLs during conversational follow-ups. Core architecture is complete; remaining work focuses on edge-case stability.
- Governed Internal Link Preservation - Strengthened rules ensuring verified internal links are preserved exactly as authored while unsafe, fabricated, or off-domain links remain blocked.
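The preserve-internal/block-external rule in the two entries above can be sketched as a host check over anchor tags. The function name and the simple regex are assumptions for illustration; a production implementation would use a real HTML parser:

```python
# Illustrative link-governance pass: preserve same-domain links exactly
# as authored, drop everything else but keep the visible text.
import re
from urllib.parse import urlparse

def sanitize_links(html: str, allowed_host: str) -> str:
    """Strip <a> tags whose href points off the allowed host; leave
    internal (same-host or relative) links untouched."""
    def repl(m):
        href, text = m.group(1), m.group(2)
        host = urlparse(href).netloc
        if host == "" or host == allowed_host:
            return m.group(0)   # internal link preserved as authored
        return text             # external link removed, text kept
    return re.sub(r'<a\s+href="([^"]+)"[^>]*>(.*?)</a>', repl, html)
```

Preserving verified internal links byte-for-byte (rather than rebuilding them) is what prevents the malformed or duplicated URLs mentioned above.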
- Authority Enforcement During Follow-Ups - Improved enforcement of authority hierarchy so follow-up responses cannot bypass grounding, relevance, or domain-scope rules even when context is reused.
- Conversation State Integrity Improvements - Additional safeguards ensure that short follow-ups reuse the correct conversation state without unintentionally resetting intent, source priority, or governance constraints.
12-09-2025
- BigBrain Global Governance Service (Major Integration) - Introduced a dedicated API endpoint that delivers a combined global system, behavior, and grounding prompt as sanitized JSON. This moves governance, style, and grounding rules out of local config files and into a centralized BigBrain service for all client installations.
- Safe JSON Output and UTF-8 Hardening for BigBrain - Added BOM detection, output-buffer scrubbing, and a safe JSON wrapper so BigBrain responses cannot be polluted by stray whitespace, PHP notices, or malformed characters. The wrapper strips ASCII control characters and normalizes UTF-8, preventing "Malformed UTF-8" errors during global prompt delivery.
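The framework itself runs server-side in PHP; the Python sketch below only illustrates the sanitization idea (strip ASCII control characters, force valid UTF-8, then encode strict JSON) and is not the framework's implementation:

```python
# Illustrative "safe JSON" emitter: scrub string values recursively,
# then emit strict JSON with no stray bytes.
import json
import re

CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]")

def safe_json(payload: dict) -> str:
    """Return a JSON string whose text values contain no ASCII control
    characters and only valid UTF-8."""
    def scrub(v):
        if isinstance(v, str):
            v = v.encode("utf-8", "replace").decode("utf-8")  # normalize UTF-8
            return CONTROL_CHARS.sub("", v)                   # drop control chars
        if isinstance(v, dict):
            return {k: scrub(x) for k, x in v.items()}
        if isinstance(v, list):
            return [scrub(x) for x in v]
        return v
    return json.dumps(scrub(payload), ensure_ascii=False)
```

Scrubbing before encoding matters: a single stray control byte inside a prompt string is enough to make strict JSON decoders on the client side reject the whole governance payload.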
- MiniBrain Global Prompt Integration & Caching - Added a global prompt fetch that retrieves the combined global governance prompt from BigBrain and caches it with a one-hour TTL. On transient network or API issues, the framework reuses the last known prompt instead of silently dropping governance or falling back to a naked model.
- Hard-Fail Behavior When Governance Is Unavailable - If BigBrain cannot be reached and no cached prompt exists, MiniBrain now returns an empty global prompt and triggers a hard fail for the chat response. This prevents any ungoverned AI calls from being made when the global ruleset cannot be loaded, ensuring that every answer is either governed or explicitly refused.
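Taken together, the two entries above describe a fetch → cache → stale-reuse → hard-fail ladder. A minimal sketch, with the fetcher, cache shape, and exception name invented for illustration:

```python
# Illustrative global-prompt loader: fresh cache, live fetch,
# stale reuse on transient failure, hard fail as the last resort.
import time

TTL = 3600  # one-hour cache lifetime

class GovernanceUnavailable(Exception):
    """Raised so the chat layer can hard-fail instead of answering ungoverned."""

def load_global_prompt(fetch, cache: dict) -> str:
    now = time.time()
    entry = cache.get("global_prompt")
    if entry and now - entry["at"] < TTL:
        return entry["text"]                      # fresh cache hit
    try:
        text = fetch()                            # BigBrain handshake
        cache["global_prompt"] = {"text": text, "at": now}
        return text
    except Exception:
        if entry:
            return entry["text"]                  # reuse last known prompt
        raise GovernanceUnavailable("no governance prompt available")
```

The ordering is the safety property: a stale-but-real ruleset always beats no ruleset, and no ruleset always beats an ungoverned model call.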
- License-Aware Logging and Diagnostics - Improved diagnostic logging around the BigBrain handshake, including HTTP status codes, curl errors, JSON decode failures, malformed UTF-8 detection, and empty-prompt conditions. Logs now clearly show the bound license ID and whether the global prompt came from the cache or directly from BigBrain.
- Onboarding Flow Alignment with Global Governance - Updated the onboarding stage engine so it runs on top of the loaded global governance/behavior prompt, guaranteeing consistent tone, HTML formatting, and domain-scope enforcement from the first user interaction.
- Cleaner Session Startup & Initial Query Handling - Adjusted early-thread logging so the initial empty poll no longer records a blank User Query entry. Onboarding now begins with the user's first real message, resulting in cleaner logs and more accurate conversation history for MiniBrain and downstream analytics.
12-06-2025
- System Prompt Restructuring (Preparation for Modular Prompt Layers) - Began separating core system logic from behavioral and formatting rules. This sets the foundation for a dedicated prompts file and future remote prompt delivery, allowing client installations to receive updated rule sets automatically.
- Centralized Behavior Rules (New) - All runtime AI interactions now consistently receive a unified behavior/formatting ruleset, ensuring stable tone, structure, and HTML output regardless of which subsystem (fallback, KB, analyzer) produced the answer.
- KB Summary Spin Rebuild (Major Upgrade) - Rewrote the Knowledge Base summarization pipeline to use a minimal, clean message stack that inherits global behavior rules. Summaries now output clean, predictable HTML with short <p> blocks and structured lists, eliminating hype language, greetings, and inconsistent tone.
- UTF-8 Safety Across System Prompts - Fixed hidden Unicode issues (smart quotes, em dashes, trademark variations) that previously corrupted JSON payloads during AI spin operations. Prompts now use safe, standardized EN-US characters, eliminating malformed-response errors.
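This kind of fix usually boils down to a small substitution table over the problem characters. The mapping below is an illustrative subset, not the framework's actual table:

```python
# Illustrative normalization of "smart" punctuation to plain EN-US
# characters of the kind that corrupted JSON payloads.
SAFE_CHARS = {
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u2014": "-", "\u2013": "-",   # em and en dashes
    "\u2122": "(TM)",               # trademark sign
}

def ascii_safe(text: str) -> str:
    """Replace known-problematic Unicode punctuation with safe equivalents."""
    for bad, good in SAFE_CHARS.items():
        text = text.replace(bad, good)
    return text
```

Running prompts through a pass like this before they are embedded in JSON removes the class of malformed-payload errors at the source rather than at decode time.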
- Consistent HTML Formatting in AI Output - All AI-generated summaries, follow-ups, and rewritten KB content now obey global formatting rules. The system reliably produces professional HTML instead of unstructured paragraphs or conversational phrasing.
- Improved KB Result Stability - KB hits now produce polished, architecture-aligned summaries without requiring manual cleanup. Spun answers remain fully grounded in the KB source and cannot introduce new information.
- Framework-Wide Output Coherence - By restructuring the message stack and enforcing prompt order, all AI subsystems now behave consistently - fallback answers, analyzer summaries, and KB rewrites share the same tone, formatting patterns, and governance constraints.
12-04-2025
- Context-Aware Link Follow-Ups (New) - Short follow-up questions such as "link?" now use the most recent article anchor or WordPress hit as a trusted source, returning the exact URL that was just referenced instead of generic homepage links.
- Topic-Based Link Fallback (New) - When no recent article anchor exists, link-style follow-ups fall back to topic-aware WordPress search, using the active conversation topic to locate the best matching article before any AI explanation is added.
- Safer Link Governance Harmonization - Link follow-up handling now cooperates with the global link-governance layer, ensuring that only URLs derived from real site content or stored anchors are surfaced while still blocking fabricated or off-domain links.
12-03-2025
- Impulse Control Layer for AI Output (New) - Added a governance layer that evaluates AI-only answers for weak grounding and replaces them with safe, domain-specific responses when appropriate, reducing off-topic or speculative replies.
- Analyzer-Safe Governance Path - Article analysis and summary mode are now explicitly exempt from impulse-control overrides, ensuring that content-based answers always use the loaded article text as their primary source.
- Grounding-Aware Uncertainty Handling - The framework now distinguishes between genuinely ungrounded model behavior and honest "I don't know" answers, preserving explicit uncertainty instead of masking it.
- Short-Query Context Handling - Minimal follow-up queries such as "link?" or "summarize it?" now benefit from tighter context reuse, reducing the chance of generic or unrelated responses.
12-02-2025
- Unified AI Governance Pipeline (Major Upgrade) - All AI fallback pathways are now routed through the framework's governance layer, enabling architecture-level control over model output. This ensures every AI answer is filtered, contextualized, grounded, and fully aligned with internal rules.
- AI Model Grounding Enforcement (New) - Added strong/weak grounding detection and automated interception of ungrounded model responses. The system now replaces weakly grounded answers with safe, accurate domain responses governed by architecture rules.
- AI Fallback Overhaul - The entire AI response-and-exit function was rebuilt to use the unified output handler. All AI answers now pass through speech-matrix logic, cache rules, logging, topic persistence, and safety layers.
- Analyzer Mode Integration - Forced analyzer messages now enter the same governance pipeline as fallback AI, ensuring consistent behavior, domain safety, and output handling even during article analysis operations.
- Query Meta Logging for AI Responses - AI fallback and analyzer responses now properly log query metadata, user linkage, and source attribution into the internal user metadata layer, enabling future personalization and interest modeling.
- Grounded Response Consistency - Added architectural definitions for strong grounding, weak grounding, and ungrounded responses, and integrated them into both the governance layer and the public glossary.
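The strong/weak/ungrounded distinction above can be illustrated with a simple evidence-overlap heuristic. The real definitions live in the governance layer; the overlap metric and thresholds below are purely hypothetical:

```python
# Hedged sketch of a grounding classifier: how much of the answer's
# vocabulary is covered by retrieved evidence. Thresholds are invented.
def grounding_level(answer_terms: set, evidence_terms: set,
                    strong: float = 0.6, weak: float = 0.2) -> str:
    """Classify a response as strong, weak, or ungrounded."""
    if not answer_terms:
        return "ungrounded"
    overlap = len(answer_terms & evidence_terms) / len(answer_terms)
    if overlap >= strong:
        return "strong"
    if overlap >= weak:
        return "weak"
    return "ungrounded"
```

A shared, explicit classification like this is what lets the interception logic (replacing weakly grounded answers with safe domain responses) behave the same way across fallback and analyzer paths.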
- Improved Safety for Model Output - External link scrubbing, internal link preservation, and URL-fabrication prevention are now applied uniformly across all fallback pathways.
- Legacy Code Cleanup - Removed deprecated fallback JSON-emit logic and unified all output through the answer-and-cache function for full consistency across the entire system.
12-01-2025
- Ethics & Domain-Scope Enforcement (Major Upgrade) - The AI now consistently follows all 10 internal Ethics Guidelines, including strict domain boundaries, refusal behavior, and brand-safe tone control. Out-of-scope queries trigger proper domain-scope responses without leakage into unrelated topics.
- Improved Privacy-Rule Compliance - The ethical ruleset now overrides generic model privacy disclaimers for most scenarios. The system reliably explains its architecture-level retention behavior while maintaining user reassurance and professionalism.
- Enhanced Safety-Layer Behavior - Model hallucination safeguards, verification steps, and accuracy requirements now exhibit stronger consistency even when public fallback providers such as OpenAI are in use.
- Scoped Interaction Stability - The assistant no longer drifts into general-purpose or conversational-AI behavior during repeated queries. Responses maintain alignment with TechDex Development & Solutions' products, services, and operational scope.
- Speech-Matrix Reply System (New) - Added randomized, spintax-driven response openers to make replies feel more natural and reduce repetitive phrasing.
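Spintax templates of the kind described above ("{Sure|Of course|Happy to help}, ...") expand with a small resolver. This sketch is a generic spintax expander, not the framework's speech-matrix code:

```python
# Minimal spintax expander: repeatedly resolve innermost {a|b|c}
# groups until no braces remain.
import random
import re

SPIN = re.compile(r"\{([^{}]+)\}")

def spin(template: str, rng: random.Random) -> str:
    """Expand one spintax template into a single concrete string."""
    while SPIN.search(template):
        template = SPIN.sub(lambda m: rng.choice(m.group(1).split("|")),
                            template, count=1)
    return template
```

Resolving innermost groups first lets templates nest, and passing an explicit `random.Random` keeps expansions reproducible in tests.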
- Slash Command Isolation - Speech matrix is automatically disabled for slash commands (e.g., /status, /login, /wipe) to maintain clean, professional system responses.
- Auth & Onboarding Safeguards - Speech-matrix is suppressed during login, PIN setup, password resets, and onboarding prompts for clarity and consistency.
- Topic Protection Improvements - Off-topic queries now safely trigger domain-scope responses without overriding the site's primary subject matter.
- Relevance Engine Verification - Confirmed full multi-signal topic relevance system: 230-630+ data points per query across keyword extraction, cloud tokens, MiniBrain scoring, and topic-engine context memory.
11-24-2025
- Improved Content Detection - Articles, services pages, and internal links are now recognized more reliably across variations in user phrasing.
- Smarter Intent Routing - The MiniBrain engine now uses layered relevance scoring to route queries more accurately to WordPress, the Knowledge Base, or fallback AI.
- Stronger Summary and Analysis Mode - URL-based article summaries now load cleanly and consistently, even with complex permalink structures.
- Higher Accuracy - Matches found in the flat file, knowledge base, or WordPress results now take priority and prevent unnecessary AI fallback.
11-22-2025
- Smarter Search - The system now reliably finds articles and content even when users phrase things loosely or omit keywords.
- Better Conversation Flow - Follow-up questions and topic shifts feel more natural and intuitive.
- Cleaner Summaries - When users request summaries, the framework now provides concise, accurate responses.
- Improved Link Detection - The AI recognizes when users want a link to the site or specific page and responds with the correct result.
- More Natural Language - Search results now include varied, human-friendly lead-in phrases.
User Experience Improvements
03-17-2026
- WordPress Benchmark Taxonomy Formalization - Reframed benchmark work around WordPress problem classes instead of broad conversational intent labels, making it easier to evaluate route quality, evidence surfaces, output shapes, and grounded miss behavior across different WordPress-backed sites.
- Benchmark Bot and Cross-Site Test Workflow Maturation - Continued hardening of the benchmark-bot testing workflow so live client sites can be evaluated through a consistent bot identity and repeatable thread-based harness. This gives the framework a more realistic real-world validation path for WordPress-native behavior without depending on manual browser-only testing.
03-14-2026
- Conversational Maintenance Flow - Added a lightweight conversational-maintenance path so acknowledgements, thanks, and compliment-style follow-ups can be handled as natural conversation turns instead of being forced back through strict content-grounding behavior.
- Internal Test Console - Added a lightweight internal testing page that can send live queries through the existing app pipeline, pin or rotate thread IDs, and read the current error log for rapid behavior and routing review.
- Onboarding Input Normalization - Tightened the onboarding gate so short replies like greetings and simple skip answers are normalized before routing, preventing punctuation or capitalization from knocking onboarding turns out of the intended workflow.
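The normalization described above is essentially a trim/lowercase/de-punctuate pass before routing. The bucket names and word lists below are invented for the sketch:

```python
# Illustrative onboarding-reply normalizer: "Hi!" and "SKIP." should
# land in the same buckets as "hi" and "skip". Word sets are assumptions.
GREETINGS = {"hi", "hello", "hey"}
SKIPS = {"skip", "no", "no thanks"}

def normalize_onboarding_reply(raw: str) -> str:
    """Strip case and terminal punctuation, then map known short
    replies onto stable routing buckets."""
    text = raw.strip().lower().rstrip(".!?,")
    if text in GREETINGS:
        return "greeting"
    if text in SKIPS:
        return "skip"
    return text
```

Normalizing before classification means punctuation or capitalization can no longer knock a turn out of the onboarding workflow.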
- Agent Test Runner - Added a lightweight PowerShell runner for sending live test queries through the existing framework endpoint and optionally pulling the current error log, making repeatable machine-driven testing easier alongside the browser console.
- Resumed Source Memory Capture - Improved response finalization so answers delivered after deferred onboarding resume can retain lightweight source memory, helping later source-follow-up questions stay aligned with the actual answer path instead of drifting to unrelated context.
- Source Follow-Up Cache Bypass - Adjusted the early response cache gate so direct source-follow-up questions can reach the thread-aware source-memory handler instead of being satisfied by stale generic cache hits.
- Resumed Query Reclassification - Deferred queries restored after onboarding now refresh their normalized query state and routing heuristics before entering the intelligence pipeline, preventing onboarding replies from contaminating later source selection.
- Working Conventions Expansion - Extended the internal working conventions to capture deferred-query re-entry, thread-aware cache discipline, thin-client resume boundaries, and answer-plus-trace verification as durable maintenance rules for future sessions.
- Evidence Interpretation Guidance Added - Expanded the internal working conventions so future grounding work treats the framework as an evidence interpreter rather than a simplistic fact-checker, preserves the distinction between consensus and first-party measured evidence, and accounts for claim scope, N-of-1 experimentation, and bias-resistant authority review.
- Living Task Tracker - Added an internal living task list so current framework priorities, subsystem work, and long-range stabilization goals can be tracked consistently across future coding sessions.
- Speaking Interfaces Added To Roadmap - Added voice-oriented interaction work to the internal living task list as a later-stage development track, to be addressed after the current framework stabilization priorities are completed.
- Chat Layout Refinement Added To Roadmap - Added a deferred chat-interface cleanup item to the internal living task list so bot-bubble spacing, sender-label layout, and paragraph-flow refinement can be revisited later without forcing an immediate UI decision.
- Live Authorized Site Search Added To Roadmap - Added a deferred live site-search fallback item to the internal living task list so the framework can later request user permission, search only authorized site URLs, and use fetched content as temporary grounding without writing directly into the canonical knowledge base.
- WordPress Source Orchestration Added To Roadmap - Added a deferred architecture item to the internal living task list for building a framework layer above WordPress retrieval, allowing the framework to optimize queries, preserve context-sensitive terms more intelligently, and eventually work across multiple WordPress-backed information sources instead of treating WordPress as a single keyword-search feature.
03-01-2026
- Speech-to-Text Stability & Mic Behavior Fix - Resolved a microphone state issue that prevented reactivation after stopping. Refactored recognition lifecycle handling to eliminate duplicate instance conflicts between Direct and Widget interfaces. Improved hover rendering so the mic icon no longer disappears.
- Expandable Message Input (UI Upgrade) - Replaced the static single-line input field with an auto-expanding textarea. The input now grows dynamically up to five rows, supports Shift + Enter for line breaks, and resets cleanly after message submission.
- Widget Layout & Alignment Refinement - Corrected textarea width compression and flex alignment inside the embedded widget. Stabilized mic and send button height during multi-line expansion without altering overall widget height constraints.
- Focus Styling Modernization - Removed the harsh default browser focus outline and introduced a controlled, theme-aligned focus state for improved visual polish while maintaining accessibility.
- Speech Recognition Error Handling Improvements - Eliminated InvalidStateError occurrences caused by repeated recognition starts. Added internal listening-state guards to prevent duplicate invocation and improve cross-browser reliability.
12-12-2025
- Clearer Topic Transition Behavior - Improved how the assistant responds when a user implicitly changes subjects mid-conversation, reducing confusion and making topic shifts feel more intentional and natural.
12-09-2025
- Governed Startup and Onboarding - If BigBrain cannot be reached and no cached global prompt exists, the assistant now refuses to answer instead of returning an ungoverned model reply. This guarantees that first-contact conversations never run without the global ethics, style, and grounding layer in place.
- Smoother Onboarding Question Flow - Onboarding stages for name and email collection now run on a clearer, stateful path, avoiding duplicate prompts and ensuring the assistant does not "forget" partially provided onboarding information mid-thread.
12-04-2025
- Smarter "Link?" Replies - The assistant now reliably answers "link?" or "is there a link?" with the article it just showed you, instead of alternating between "no link available" and generic navigation suggestions.
- Lower-Friction Article Exploration - Users can move naturally from "tell me about..." to "link?" and then to "summarize that" without re-pasting URLs or losing context, making deep dives into framework documentation feel more like a guided conversation.
12-03-2025
- Clearer "Not Confident" Responses - When the framework determines that an AI-only answer is weakly grounded, users now receive a transparent, domain-specific explanation instead of a generic or fabricated response.
- Less Repetitive Link Replies - Follow-up questions asking for "a link" reuse active context more reliably, reducing redundant or contradictory statements about link availability.
11-24-2025
- Cleaner Conversation Flow - Follow-up detection and topic tracking deliver smoother, more contextual conversations from one message to the next.
- Reduced Hallucinations - New safeguards ensure the model does not invent company services, URLs, or article titles.
- More Consistent Replies - Lead-in phrases for search results are now rotated automatically to maintain natural dialogue.
11-22-2025
- Fewer Errors - Edge cases that previously caused blank responses or loops have been resolved.
- Better Understanding - The AI interprets vague prompts such as "search again" or "check again" without confusion.
- Faster Results - Reduced need for fallback AI, lowering response times and increasing reliability.
Content Interaction Enhancements
12-04-2025
- Article Anchor Reuse for Links - WordPress search and analyzer mode now share a common article anchor (URL, ID, title) that can be reused for link-style follow-ups, making "link?" and similar questions resolve to the correct post without re-running heavy search logic.
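The shared anchor above can be pictured as a small record held in conversation state. The dataclass fields mirror the (URL, ID, title) triple described in the entry; everything else is illustrative:

```python
# Sketch of a shared article anchor reused by "link?"-style follow-ups.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArticleAnchor:
    url: str
    post_id: int
    title: str

def answer_link_followup(anchor: Optional[ArticleAnchor]) -> Optional[str]:
    """Resolve a link follow-up from the stored anchor; return None to
    signal that topic-aware WordPress search should run instead."""
    if anchor is None:
        return None
    return anchor.url
```

Returning the stored URL directly is what lets "link?" resolve to the correct post without re-running the heavier search logic.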
11-24-2025
- Improved Article Recognition - The framework now identifies correct posts based on titles, IDs, and URL patterns before performing deeper search operations.
- Reliable Summary Delivery - All summaries, breakdowns, and analysis requests use a dedicated, isolated analyzer mode for consistent results.
- High-Integrity Link Handling - Internal links are preserved and converted to clean HTML; external links are removed for safety.
Core Architectural Infrastructure
03-16-2026
- Grounding Model Documentation Refresh - Updated the internal scope and working-conventions documents plus the public scope overview so the framework's architecture now describes knowledge layers as authority-weighted evidence signals feeding one governed answer path, with canonical KB acting as the strongest truth bias rather than an isolated single-source answer engine.
03-14-2026
- Onboarding Isolation Governance Clarification - Clarified the framework architecture and scope documents so onboarding and authentication are treated as a closed, pre-cognitive interrupt phase. While onboarding is active, name capture, email capture, session-save prompts, and password or PIN setup must not trigger MiniBrain, knowledge retrieval, analyzer execution, or AI fallback until that phase completes.
- Working Conventions Documentation Added - Added a dedicated repository conventions document to preserve recurring maintenance rules such as public-vs-internal release writeups, ASCII-safe formatting expectations, patch-oriented edits, and onboarding isolation handling across future sessions.
- Source Authority Routing Hardening - Tightened content-routing behavior so explicit URL analysis and recent article follow-ups keep the correct source authority instead of being redirected back through unrelated knowledge-base context.
- Source Authority Refinement - Added broader source-authority handling so first-party business questions are less likely to be claimed by conceptual knowledge entries when live site content is the better fit.
- Flat-File Authority Refinement - Refined the quick-answer flat-file behavior so deliberately provided exact matches can still answer directly, while weaker keyword-style flat-file hits are now treated as subordinate candidates that only apply if stronger knowledge-base or site-content authority does not satisfy the turn.
- Exact Flat-File Learning Capture - Updated exact flat-file matches so they are recorded into the framework's lower-authority learned-history layer before the answer is returned, preserving them as candidates for later review and possible elevation without turning the flat file itself into canonical knowledge.
- Last-Answer Source Recall - Added lightweight source-memory handling so follow-up questions about where the last answer came from can resolve more directly to the active knowledge-base or site-content source already held in state.
- Expanded Working Conventions - Extended the working conventions to capture durable guidance around upstream authority decisions, permissive prompting in runtime behavior, conversation quality expectations, and safe use of the stable release baseline.
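The authority ordering described across these entries might be sketched as a ranked arbitration step. The numeric ranks and source labels below are illustrative assumptions only; the framework's real weighting is internal:

```python
from typing import Optional

# Illustrative authority ranks: canonical KB and deliberate exact
# flat-file matches dominate; weak keyword flat-file hits are
# subordinate candidates.
AUTHORITY_RANK = {
    "flat_file_exact": 5,   # assumed: exact matches may answer directly
    "canonical_kb": 4,
    "site_content": 3,
    "learned_history": 2,
    "flat_file_keyword": 1,
}

def pick_answer(candidates: list) -> Optional[dict]:
    """Choose the highest-authority candidate; ties keep input order."""
    if not candidates:
        return None
    return max(candidates, key=lambda c: AUTHORITY_RANK.get(c["source"], 0))
```

Under this sketch, a weak keyword flat-file hit only wins when no knowledge-base or site-content candidate is present, matching the subordinate-candidate behavior described above.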
12-15-2025
- BigBrain License Cache Fetch & Validation (New) - Implemented a dedicated license retrieval and caching mechanism for BigBrain, allowing license tier and capability data to be fetched once, validated, cached locally, and reused across sessions.
- License Cache Validator & Hard Fallback - Added strict validation rules to ensure license cache integrity. If the license cache is missing or invalid, the framework safely falls back to default Tier IV behavior instead of attempting ungoverned capability escalation.
- Decoupled Governance and Licensing Fetch - Separated global governance prompt retrieval from license retrieval, ensuring that governance remains active even when license data is unavailable or temporarily unreachable.
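The validate-or-fall-back behavior can be sketched like this; the field names, TTL, and tier labels are assumptions for illustration, not the framework's actual schema:

```python
import json

DEFAULT_TIER = "IV"      # safe default when no valid license cache exists
CACHE_TTL = 24 * 3600    # assumed refresh window, illustrative

def resolve_tier(cache_json, now: float) -> str:
    """Validate the cached license blob; fall back to Tier IV on any
    failure rather than attempting ungoverned capability escalation."""
    if not cache_json:
        return DEFAULT_TIER
    try:
        cache = json.loads(cache_json)
        tier = cache["tier"]
        fetched_at = float(cache["fetched_at"])
    except (ValueError, KeyError, TypeError):
        return DEFAULT_TIER  # malformed cache: never escalate
    if now - fetched_at > CACHE_TTL:
        return DEFAULT_TIER  # stale cache is treated as invalid
    if tier not in ("IV", "V"):
        return DEFAULT_TIER
    return tier
```

Every failure path converges on the same default, which is the essence of a hard fallback: invalid state can only ever lower capability, never raise it.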
12-14-2025
- Multi-LLM Architecture Foundation - Introduced a provider-agnostic execution layer that allows the framework to operate consistently across multiple AI backends without duplicating governance, safety rules, or output logic.
- Centralized AI Exit & Governance Enforcement - All supported AI providers now terminate through the same governed response pipeline, guaranteeing uniform formatting, grounding enforcement, caching behavior, and safety controls.
12-09-2025
- Centralized Global Governance Layer via BigBrain - The framework now treats the BigBrain global prompt service as a first-class architectural dependency. All client instances load a single, centralized governance/behavior/grounding prompt from the BigBrain API, cache it locally, and hard-fail gracefully if it is unavailable, guaranteeing that every response is generated under a consistent, centrally managed rule set.
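A minimal sketch of the load-cache-or-hard-fail pattern, assuming an injected fetch callable (the function names here are hypothetical):

```python
from typing import Callable, Optional

_cached_prompt: Optional[str] = None

def load_governance_prompt(fetch: Callable[[], str]) -> str:
    """Fetch the central governance prompt and reuse the local copy.
    If the service is unreachable and no cached copy exists, refuse
    to run rather than operate ungoverned."""
    global _cached_prompt
    try:
        _cached_prompt = fetch()
    except OSError:
        if _cached_prompt is None:
            raise RuntimeError(
                "governance prompt unavailable; refusing to run ungoverned"
            )
    return _cached_prompt
```

The design choice worth noting: an outage degrades to the last known-good prompt, but a cold start with no cache fails hard, so no response is ever generated outside the governed rule set.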
01-25-2025
- Topic Cloud Engine (New) - Introduced a live topic cloud system that identifies core themes, tracks subject alignment, and allows the AI to understand what the user is "really" talking about across multiple messages.
- Relevance Scoring Layer - Added a multi-tier relevance engine (core, related, peripheral, off-topic) used to guide intent routing, minimize hallucinations, and stabilize long conversations.
- Global Safety Patch - Implemented the first-generation global safety net to prevent incorrect fallbacks, reduce AI misroutes, and ensure that when the system finds a valid internal match, the conversation stays grounded.
- Enhanced WP Search Priority - Search results that match WordPress posts (title, slug, ID, or snippet match) now automatically take precedence over fallback AI responses.
- Last WP Hit Tracking - Added a temporary memory system ("short-term conversational context retention") that remembers the last article the user interacted with and injects its context into analysis follow-ups.
- Dedicated Article Loader Module - New module cleanly loads full article bodies, sanitizes content, and prepares them for analyzer mode.
- Analyzer Mode Overhaul - Completely rebuilt the article analyzer pipeline to prevent recursion, prevent invalid disclaimers, and ensure consistent behavior when summarizing or analyzing internal URLs.
- Internal vs. External Link Intelligence - The model now correctly distinguishes between internal and external links, preserving internal links and scrubbing external ones.
- Pending Response Handling Stabilized - Reinforced pending-response detection so that the system avoids stalls and no longer drops follow-ups accidentally.
- UTF-8 & Encoding Fixes - Added deeper normalization of analyzer and system messages to prevent encoding corruption and malformed trademark characters.
- Improved Debug Logging - Reworked the debug console layout and added `<hr>` separators for easier block reading.
- Cleaner Conversation State - Fixed edge cases where off-topic classification interrupted WP routing; relevance scoring now correctly respects follow-up questions.
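The four-tier relevance engine mentioned above might look roughly like the following; the thresholds are invented for illustration and are not the framework's values:

```python
# Tier thresholds are illustrative assumptions.
TIERS = (
    (0.75, "core"),
    (0.50, "related"),
    (0.25, "peripheral"),
)

def relevance_tier(score: float) -> str:
    """Map a 0..1 topic-alignment score onto the four relevance tiers
    (core, related, peripheral, off-topic) used for intent routing."""
    for threshold, tier in TIERS:
        if score >= threshold:
            return tier
    return "off-topic"
```

Routing logic can then treat "core" and "related" hits as grounded-conversation candidates and only allow fallback behavior for "off-topic" scores.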
11-24-2025
- Safe JSON Output - A new sanitizer prevents malformed content from breaking the UI and ensures all output is valid UTF-8.
- Frontend Decode Layer - The response handler now safely decodes escaped characters.
- Pending Response Reliability - Eliminated stalls when the AI returns placeholder text; results now flow correctly into the UI.
- Consistent Routing - Handles ambiguous requests and repeated prompts without losing context.
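The sanitizer's contract can be sketched in a few lines; `safe_json_reply` is a hypothetical name, not the framework's function:

```python
import json

def safe_json_reply(text: str) -> str:
    """Produce a UI-safe JSON payload: force valid UTF-8 and let
    json.dumps escape anything that could break the frontend."""
    # Replace invalid sequences rather than letting them corrupt the payload.
    clean = text.encode("utf-8", errors="replace").decode("utf-8")
    return json.dumps({"reply": clean}, ensure_ascii=True)
```

With `ensure_ascii=True`, every non-ASCII character is emitted as a `\uXXXX` escape, so the payload survives any transport or logging layer, and the frontend decode step restores the original characters.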
Stability & Polish
03-16-2026
- Publisher-Style WordPress Query Handling - Added a first-pass deterministic WordPress retrieval layer for publisher and archive-heavy installs so singular latest/last story questions, homepage lead-story phrasing, topic-constrained latest lists, author-oriented lookups, and topic/category count queries are less likely to fall through into generic relevance search.
- WordPress Query Profile Groundwork - Introduced a lightweight WordPress query-profile classifier that extracts rough retrieval roles such as editorial structure, chronology, author lookup, catalog count, topic lookup, and contact-by-department, giving the framework an early arbitration surface above raw WordPress search without yet requiring a fully separate intelligence module.
- Department-Aware Contact Routing - Refined WordPress contact handling so subject-driven questions like advertising, marketing, sponsorship, editorial, support, and events are used to score the most relevant contact surface before the system falls back to a generic contact page.
- Auto-Resume Temporal Misfire Fix - Corrected deferred-query auto-resume behavior so resumed questions containing words like "today" or other temporal language are no longer sent directly to the system-time handler unless MiniBrain first confirms they are genuinely temporal questions, preventing content requests like "today's top story" from collapsing into a date-only answer.
- Widget CSS Containment Hardening - Scoped broad widget stylesheet rules so the embeddable chat no longer repaints host-page backgrounds, anchors, and general layout surfaces when installed on client sites with their own themes, while preserving the direct `/app` interface styling.
- SVG Icon Reliability Upgrade - Replaced the most visible launcher, pop-out, microphone, and feedback controls with local SVG assets under the application image directory, removing the widget's dependence on icon-font rendering for those controls across different client environments.
- Widget First-Load Rendering Stabilization - Updated the widget bootstrap so it waits for its stylesheet before mounting, assigns explicit icon dimensions on initial render, and reduces the cold-load race conditions that could leave the microphone invisible, the launcher oversized, or drag behavior inconsistent until refresh.
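The query-profile classifier described above could be sketched as a first-pass pattern table; the patterns are simplified assumptions, while the role names follow the entry's own terminology:

```python
import re

# Simplified, assumed patterns for each retrieval role.
PROFILES = [
    ("chronology", re.compile(r"\b(latest|last|newest|most recent)\b", re.I)),
    ("author_lookup", re.compile(r"\b(written by|author|by whom)\b", re.I)),
    ("catalog_count", re.compile(r"\bhow many\b", re.I)),
    ("contact_by_department", re.compile(
        r"\bcontact\b.*\b(advertising|editorial|sponsorship|support)\b"
        r"|\b(advertising|editorial|sponsorship|support)\b.*\bcontact\b",
        re.I)),
]

def classify_query(question: str) -> str:
    """Return the first matching retrieval role; otherwise fall
    through to ordinary relevance search."""
    for role, pattern in PROFILES:
        if pattern.search(question):
            return role
    return "relevance_search"
```

The key property is that classification is deterministic and happens before any broad search, giving the framework an arbitration surface above raw WordPress relevance scoring.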
03-15-2026
- Documentation Reference Expansion - Updated the framework documentation set so the wiki, scope, design principles, glossary, FAQ, docs index, and trademark-use pages reflect the current Tier IV governance language, evidence-handling model, and source-authority terminology.
- Cross-Linked Terminology Guidance - The FAQ now links key defined terms directly into the glossary, allowing readers to jump from practical answers into the authoritative project-specific definitions behind them.
- PDF Archive and Discovery Support - Added a versioned PDF archive index for documentation history, generated a new trademark-use PDF, and added both `llms.txt` and `sitemap.xml` under `/docs` to improve indexing, search discovery, and machine-readable documentation access.
- Homepage Messaging Alignment - Refined the main site homepage so marketing-facing language better matches the framework's current governed intelligence model while still preserving a strong sales-facing presentation, including clearer training-control wording, improved documentation visibility, and updated structured metadata.
- PDF Archive Presentation Finalization - Finalized the PDF archive view with a dedicated current-version section, explicit previous-version history, and visible publication dates so the documentation trail reads as a clear release record instead of a flat file list.
- Chat Interface Modernization - Redesigned the direct chat interface and embeddable widget with a more modern blue-forward visual system, improved header structure, refined launcher styling, cleaner message spacing, and a tighter input layout that keeps the chat surface aligned with the main site brand.
- Launcher Stability and Drag Controls - Added draggable launcher support for desktop pointer devices, kept mobile/touch interactions tap-safe, and constrained launcher behavior so it remains visible through resizing and viewport changes without drifting off-screen.
- Widget Pop-Out Workspace - Added a pop-out control that opens the active conversation in a dedicated chat-only window, preserves the current thread, hides normal launcher chrome in popout mode, and keeps the chat surface anchored inside the popup instead of inheriting floating widget positions.
- Config-Driven Chat Subtitle - Replaced the hard-coded chat subtitle with the deployment-level `app_tag` value from client configuration, added safe fallback handling, and constrained the displayed subtitle to a single-line ellipsis with a full-text hover title.
- Unfocused Reply Attention Signal - Added out-of-focus reply alerts for the direct chat and pop-out window, including flashing plain-text title updates and a lightweight browser-side notification chime when a real assistant reply lands while the chat window is not focused.
03-14-2026
- Session Storage Alignment Correction - Reviewed and corrected session storage handling so the framework keeps using the established v1.0 directory layout, with thread transcripts, per-thread state files, and framework session support JSON remaining in the main sessions directory for runtime compatibility.
12-22-2025
- Deferred Query Auto-Resume After Onboarding (New Feature) - The framework now captures a user's first real query during onboarding and automatically resumes it once onboarding completes, allowing conversations to continue without requiring the user to re-enter their original request.
- Pre-Cognitive Onboarding Isolation & Deferred Query Resume - Formalized onboarding as a non-cognitive interrupt phase that executes before any intent detection, topic inference, keyword extraction, or source routing occurs. User queries submitted during onboarding are now captured once, quarantined, and reintroduced only after onboarding completion, ensuring they enter the intelligence pipeline exactly as a first-class query.
- Conversation-State Contamination Prevention During Onboarding - Resolved a stability issue where onboarding inputs could overwrite or invalidate topic, intent, or relevance state. Cognitive state is now explicitly frozen during onboarding, preventing false topic creation, keyword loss, or intent downgrades when the deferred query is resumed.
- Controlled Cognitive Re-Entry After Onboarding - Improved post-onboarding recovery so deferred queries resume without triggering domain vetoes, empty grounding contexts, or forced AI fallback retries. This ensures clean topic inference, reliable WordPress routing, and consistent relevance scoring after onboarding completes.
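The quarantine-and-resume flow described in these entries reads roughly like the following state machine; the class and method names are illustrative, not the framework's own:

```python
from typing import Optional

class OnboardingSession:
    """Sketch of onboarding as a closed interrupt phase: the first real
    query is quarantined and replayed only after onboarding completes."""

    def __init__(self) -> None:
        self.onboarding_active = True
        self.deferred_query: Optional[str] = None

    def handle(self, message: str) -> str:
        if self.onboarding_active:
            if self.deferred_query is None:
                self.deferred_query = message  # capture once; no cognition runs
            return "onboarding-prompt"         # stay in the interrupt phase
        return f"answer:{message}"             # normal governed pipeline

    def complete_onboarding(self) -> Optional[str]:
        """Finish onboarding and resume the quarantined query as a
        first-class turn through the normal pipeline."""
        self.onboarding_active = False
        query, self.deferred_query = self.deferred_query, None
        return self.handle(query) if query else None
```

Because the deferred query re-enters through the same `handle` path as any fresh message, it picks up topic inference and routing exactly as if the user had just typed it, which is the contamination-prevention property the entries above describe.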
12-21-2025
- Stability: License Management Handling - Improved license-state handling to reduce edge-case failures during validation and runtime enforcement. This update strengthens consistency in how license status is read, applied, and maintained across requests.
12-18-2025
- ASCII-Safe Response Normalization (Minor Stability Improvement) - Updated response handling to enforce ASCII-safe punctuation and characters across all governed output paths. This eliminates smart quotes, em dashes, and unsafe Unicode artifacts that could cause malformed JSON, cache write failures, or UI rendering issues, improving cross-browser consistency, logging reliability, and downstream processing without altering response content or behavior.
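The normalization step amounts to a small replacement table; the exact character set the framework maps is an assumption here, covering only the common offenders named above:

```python
# Illustrative mapping of unsafe punctuation to ASCII equivalents.
ASCII_MAP = {
    "\u2018": "'", "\u2019": "'",   # smart single quotes
    "\u201c": '"', "\u201d": '"',   # smart double quotes
    "\u2013": "-", "\u2014": "-",   # en / em dashes
    "\u2026": "...",                # ellipsis
    "\u00a0": " ",                  # non-breaking space
}

def ascii_safe(text: str) -> str:
    """Normalize governed output to ASCII-safe punctuation."""
    for bad, good in ASCII_MAP.items():
        text = text.replace(bad, good)
    return text
```

Running every governed output path through one such function is what makes downstream JSON encoding, cache writes, and logging behave identically across browsers.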
12-17-2025
- Link Rendering Rule Optimization - Refined link-generation rules to prevent malformed HTML during long conversations. Presentation-specific attributes were decoupled from cognitive response generation, ensuring external links render cleanly and consistently in chat without interfering with reasoning or response structure.
12-16-2025
- Cross-Domain Widget Rendering Stability - Improved widget z-index enforcement and runtime DOM monitoring to prevent conflicts with aggressive site themes, overlays, and mobile navigation layers when embedded across diverse client environments.
- Form Autofill & Bot Injection Hardening - Disabled browser autofill on chat inputs and reinforced honeypot handling to reduce accidental autofill noise and automated bot injection during live conversations.
- Cleaner Error Surface for Restricted Paths - Improved error message handling when users are redirected from protected or non-browsable application paths, allowing clear feedback without exposing directory structure or internal routing.
12-15-2025
- Startup Order & Dependency Stabilization - Corrected initialization order to guarantee that global governance, license binding, and Tier mode resolution occur before any AI execution path is evaluated.
- Improved Diagnostic Visibility for Tier Mode - Logs now clearly indicate whether the framework is operating in Tier IV (single-provider) or Tier V (multi-provider) mode, simplifying debugging and client support.
- Multi-Domain Widget Session Reliability - Improved widget-side connection handling to ensure stable session creation and response flow when embedded on multiple client sites simultaneously.
- Prompt Execution Efficiency Improvements - Reduced prompt verbosity and internal repetition, lowering cognitive load on downstream models while preserving governance strength and grounding fidelity.
12-14-2025
- Provider-Level Error Visibility - Enhanced logging and diagnostics now clearly identify when an upstream AI provider (OpenAI, xAI, or Gemini) is unavailable, improving observability during service interruptions.
- Graceful AI Fallback Handling - Strengthened framework-level handling of AI execution failures to prevent partial, malformed, or misleading responses during upstream outages.
12-12-2025
- Follow-Up Edge Case Stabilization - Addressed edge cases where short follow-ups could inherit incorrect context, improving reliability during rapid conversational exchanges.
11-22-2025
- Smoother message recovery when AI signals a pending response.
- More consistent routing of search requests.
- Expanded checks that prevent missing or incorrect answers.
Ready for Client Deployment
TechDex AI Framework ™ v1.0 is now a stable, production-ready framework designed for real-world use across a wide range of environments. The system incorporates a robust, governed architecture capable of handling unpredictable user behavior while maintaining accuracy, safety, and consistency.
The framework is suitable for deployment on:
11-22-2025
- business websites
- support assistants
- knowledge-base portals
- content-heavy sites with large archives
- membership and client portals
What's Next
Upcoming updates will focus on personalization features, conversational tone profiles, UI modernization, and a mobile application integration that will allow businesses to deploy their own branded AI assistant across devices.