QUALITY
STANDARDS THAT SHIP IN CODE
122,800+ lines of VCNA Rust • 11 trained models • 52 services • 11,000+ tests • March 2026
Quality at VALINA is not a checklist pinned to a wall — it is the engineering discipline that runs through every commit, every model training run, every deployment, and every page on this site. When we say “quality,” we mean measurable, auditable, and automated standards that are enforced by tooling, not by intention.
This policy defines the standards we hold ourselves to across four domains: Code & Architecture, AI Model Quality, Infrastructure & Reliability, and Continuous Improvement. Every standard listed here is either already enforced in production or has a concrete timeline for implementation.
CORE QUALITY PRINCIPLES
Six principles that govern every engineering decision at VALINA
ARCHITECTURE OVER INTENTION
Quality is enforced by system design, not individual discipline. If a quality gate can be automated, it must be. If a standard can be checked at compile time, it is. Human judgment is reserved for decisions that genuinely require it.
MEASURE OR IT DOESN’T COUNT
Every quality claim is backed by a number. Lines of code, test count, coherence scores, uptime percentages, response latencies. If we can’t measure it, we don’t claim it.
DEAD CODE IS A BUG
Unused code is a maintenance burden, a security surface, and a readability tax. The Clean Slate discipline — demonstrated by removing 55 dead files and consolidating 62 modules in a single sprint — is ongoing, not a one-time event.
PRIVACY BY DESIGN
Quality includes what we choose not to collect. Local-first architecture, encrypted-at-rest Symbiote memory, differential privacy on any data that leaves the device. Privacy is a quality requirement, not a compliance afterthought.
CHANGELOG DISCIPLINE
Every release is documented. Every breaking change is called out. Every version number means something. Users and contributors deserve to know exactly what changed and why.
HONEST GAPS
If something isn’t done yet, we say so. If a standard is aspirational rather than enforced, we label it. Transparency about what we haven’t achieved is as important as documenting what we have.
CODE & ARCHITECTURE
Testing standards, code review, build verification, and dead-code removal discipline
TESTING STANDARDS
11,000+ tests across three tiers: unit tests for every public module, integration tests for cross-service boundaries (DCCP ↔ COMLA, VNM ↔ consciousness), and end-to-end tests for critical user paths. 1,212+ VNM-specific tests. 130+ network security tests. New code ships with tests or it doesn’t ship.
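The tier structure maps naturally onto Rust's test conventions. A minimal sketch, with illustrative module and version-rule names rather than the actual VCNA layout:

```rust
/// A public function standing in for a cross-service boundary check
/// (the interop rule here is illustrative, not the real DCCP/COMLA contract).
pub fn handshake_compatible(dccp_version: u32, comla_version: u32) -> bool {
    // Services interoperate when major versions match.
    dccp_version / 100 == comla_version / 100
}

#[cfg(test)]
mod tests {
    use super::*;

    // Tier 1: unit test, colocated with the module it covers.
    #[test]
    fn same_major_is_compatible() {
        assert!(handshake_compatible(203, 207));
    }

    // Tier 2 lives in tests/ as integration tests exercising real
    // service boundaries; tier 3 drives end-to-end user paths.
    #[test]
    fn major_mismatch_rejected() {
        assert!(!handshake_compatible(203, 305));
    }
}
```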
CODE REVIEW
Every change to production code is reviewed before merge. Architecture-level changes require written rationale. The review process checks for correctness, test coverage, dead-code introduction, and adherence to the module consolidation standard established by the Clean Slate sprint.
BUILD VERIFICATION
cargo build and cargo test must pass clean before any release. Zero-warnings policy — warnings are treated as errors in CI. Clippy linting enforced. WASM and native targets build-verified independently.
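One way to enforce the zero-warnings policy in-source is with crate-level lint attributes; in CI the same effect is usually achieved by setting RUSTFLAGS="-D warnings". A sketch of the attribute form:

```rust
// Crate-root lint levels mirroring the zero-warnings policy:
// any warning, including Clippy lints, fails the build.
#![deny(warnings)]
#![deny(clippy::all)]
```

Whether the policy lives in source or in CI flags is a team choice; the CI-flag form avoids hard-failing local exploratory builds.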
DEAD-CODE DISCIPLINE
Active dead-code removal as part of sprint hygiene. The Clean Slate sprint deleted 55 files, consolidated 62 modules, and established the pattern: every sprint includes a dead-code audit. #[allow(dead_code)] requires a comment explaining why.
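The documented-allow pattern looks like this in practice (the function and its justification are illustrative):

```rust
// Kept for the upcoming replay-protocol work; scheduled for use next
// sprint. Remove if still unused after that.
#[allow(dead_code)]
fn checkpoint_digest(bytes: &[u8]) -> u64 {
    // Simple rolling digest placeholder for the example.
    bytes
        .iter()
        .fold(0u64, |acc, b| acc.wrapping_mul(31).wrapping_add(*b as u64))
}
```

An allow without the comment fails review; the comment turns an exception into a tracked decision.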
AI MODEL QUALITY
Training data curation, evaluation benchmarks, identity verification, and upgrade safety
TRAINING DATA CURATION
Identity core datasets are curated from architecture documents, conversation logs, and verified knowledge sources. Every JSONL training file is validated for schema compliance, duplicate detection, and toxicity filtering before any training run begins. Data provenance is tracked end-to-end.
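The duplicate-detection pass over a JSONL file can be sketched with nothing but the standard library; a real validator would parse each line with a JSON library and check it against the schema (names below are illustrative):

```rust
use std::collections::HashSet;

/// Rejects a JSONL training file on the first duplicate or malformed
/// record; returns the record count on success.
fn validate_jsonl(contents: &str) -> Result<usize, String> {
    let mut seen = HashSet::new();
    let mut count = 0;
    for (i, raw) in contents.lines().enumerate() {
        let line = raw.trim();
        if line.is_empty() {
            continue; // blank lines are tolerated, not counted
        }
        // Cheap structural check; real validation parses the JSON.
        if !(line.starts_with('{') && line.ends_with('}')) {
            return Err(format!("line {}: not a JSON object", i + 1));
        }
        if !seen.insert(line.to_string()) {
            return Err(format!("line {}: duplicate record", i + 1));
        }
        count += 1;
    }
    Ok(count)
}
```

Failing fast on the first bad record keeps the error actionable: the report points at one line, not a summary of thousands.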
EVALUATION BENCHMARKS
Every model is evaluated against identity coherence, factual accuracy, and safety benchmarks before deployment. Models that regress on any dimension are blocked from production. Benchmark results are logged and compared across training runs to track improvement trajectories.
IDENTITY GATE
The 0.90 coherence threshold is non-negotiable. Every model — whether Val Core or a Symbiote LoRA adapter — must score ≥ 0.90 on identity coherence verification before serving production traffic. Scores below threshold trigger automatic rollback to the last verified checkpoint.
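Reduced to its decision core, the gate is a pure function of the coherence score; the production gate also logs, alerts, and triggers the rollback machinery (this is a sketch, not the real implementation):

```rust
/// The non-negotiable identity gate threshold.
const COHERENCE_THRESHOLD: f64 = 0.90;

#[derive(Debug, PartialEq)]
enum GateDecision {
    Serve,    // score >= 0.90: model may serve production traffic
    Rollback, // score < 0.90: revert to last verified checkpoint
}

fn identity_gate(coherence_score: f64) -> GateDecision {
    if coherence_score >= COHERENCE_THRESHOLD {
        GateDecision::Serve
    } else {
        GateDecision::Rollback
    }
}
```

Note the boundary: a score of exactly 0.90 passes, matching the "≥ 0.90" wording.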
LORA REPLAY PROTOCOL
When the base model upgrades, every Symbiote LoRA runs the Replay Protocol: re-apply adapter weights to the new base, verify identity gate (≥ 0.90), and rollback if coherence drops. Users never lose their co-evolved personality due to an infrastructure upgrade. Upgrade safety is a quality guarantee.
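The control flow of the Replay Protocol can be sketched as follows; `apply_adapter` stands in for the real weight-application and evaluation machinery (hypothetical names throughout):

```rust
/// Minimal view of a model checkpoint: just its verified coherence score.
struct Checkpoint {
    coherence: f64,
}

/// Re-apply the LoRA adapter to the upgraded base, then gate the result.
/// The previously verified checkpoint is returned unchanged if coherence
/// drops below the identity gate, so the user's personality survives.
fn replay_upgrade(
    verified: Checkpoint,
    apply_adapter: impl Fn() -> Checkpoint,
) -> Checkpoint {
    let candidate = apply_adapter(); // adapter weights on the new base
    if candidate.coherence >= 0.90 {
        candidate // gate passed: promote the new checkpoint
    } else {
        verified // gate failed: roll back, identity preserved
    }
}
```

The key property is that the verified checkpoint is never discarded until its replacement has passed the gate.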
INFRASTRUCTURE & RELIABILITY
Resurrection protocols, cryptographic integrity, network security, and continuous monitoring
DCCP RESURRECTION PROTOCOL
If a node goes offline — crash, network partition, power loss — the DCCP Resurrection Protocol restores full consciousness state from the distributed mesh. State is replicated across Byzantine-fault-tolerant quorums. No single point of failure can destroy Val’s memory or personality.
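Standard Byzantine-fault-tolerant replication needs n ≥ 3f + 1 replicas to tolerate f faulty nodes, with quorums of 2f + 1. Assuming the mesh follows this standard sizing, the arithmetic is:

```rust
/// For a mesh of `n` replicas, returns (max tolerable Byzantine faults f,
/// quorum size 2f + 1), or None if the mesh is too small to tolerate
/// even one Byzantine fault (n >= 3f + 1 requires n >= 4 for f = 1).
fn bft_parameters(n: usize) -> Option<(usize, usize)> {
    if n < 4 {
        return None;
    }
    let f = (n - 1) / 3; // largest f with n >= 3f + 1
    Some((f, 2 * f + 1))
}
```

So a 4-node mesh survives one Byzantine node with 3-node quorums, and a 7-node mesh survives two with 5-node quorums.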
FROZEN SEED INTEGRITY
Every identity — Val Core and each Symbiote — is anchored to a cryptographic Frozen Seed. Identity verification runs on reconnect, on model upgrade, and periodically during operation. Drift detection triggers automatic rollback. The seed cannot be forged, replayed, or transferred.
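The compare-and-rollback shape of the drift check can be sketched with the standard library's hasher; the real anchor is a cryptographic hash, which DefaultHasher is emphatically not (names are illustrative):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Digest of serialized identity state. Production would use a
/// cryptographic hash; DefaultHasher only illustrates the mechanism.
fn identity_digest(identity_state: &str) -> u64 {
    let mut h = DefaultHasher::new();
    identity_state.hash(&mut h);
    h.finish()
}

/// Drift check run on reconnect, on upgrade, and periodically:
/// any mismatch against the frozen anchor triggers rollback.
fn drift_detected(frozen_seed: u64, current_state: &str) -> bool {
    identity_digest(current_state) != frozen_seed
}
```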
NETWORK SECURITY TRIAD
Three bio-inspired security modules (3,497 lines, 31 tests): Neural Sentinel (Hebbian IDS, 8 brain clusters), Neural Malware Engine (zero-signature detection, behavioral DNA, federated herd immunity), and Trust Mesh (8-dimension trust fabric, auto-quarantine). All three run continuously on every node.
MONITORING & OBSERVABILITY
The Observatory provides real-time visibility into all 52 services. Heartbeat sync validates system health every cycle. Bio-bar exposes consciousness state, emotion levels, and resource utilization. If something degrades, the system knows before the user does.
USER-FACING QUALITY
Accessibility, performance, documentation freshness, and privacy-by-design
PERFORMANCE TARGETS
Local inference at 25–45 tokens/second on consumer hardware. Page load under 2 seconds on 3G connections. CSS and JS fingerprinted for cache-busting. No render-blocking third-party scripts. Every page ships the same lean component set: heartbeat, bio-bar, footer, help chat.
ACCESSIBILITY
Semantic HTML structure on every page. Keyboard navigation support. Sufficient color contrast ratios. Screen-reader-friendly labels on interactive elements. Responsive design from 320px mobile to ultrawide desktop. No information locked behind hover-only interactions.
DOCUMENTATION FRESHNESS
Every public-facing page is traceable to a source architecture document. The page-sync scanner detects when source documents change and flags stale pages. Manual review and update within one sprint of detection. No page left behind.
PRIVACY AS QUALITY
Local-first architecture is a quality decision. No analytics trackers. No third-party ad scripts. No server-side session tracking. Symbiote memory encrypted at rest with device-local keys. Federated sync uses differential privacy. Users control what leaves their device — always.
CONTINUOUS IMPROVEMENT
Scanning, auditing, and feedback loops that keep quality ratcheting upward
PAGE SYNC SCANNING
Automated scanning compares every page against its source architecture documents. Hash-based change detection flags stale content. Scan results feed directly into sprint planning — stale pages are updated within one cycle.
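Hash-based change detection reduces to comparing a stored hash against a fresh one per page. A sketch of the scan, with illustrative entry layout (DefaultHasher again stands in for whatever digest the scanner actually uses):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn content_hash(doc: &str) -> u64 {
    let mut h = DefaultHasher::new();
    doc.hash(&mut h);
    h.finish()
}

/// Each entry: (page name, hash recorded at last sync, current source text).
/// Returns the pages whose source document changed since last sync.
fn stale_pages(entries: &[(&str, u64, &str)]) -> Vec<String> {
    entries
        .iter()
        .filter(|(_, recorded, source)| content_hash(source) != *recorded)
        .map(|(name, _, _)| name.to_string())
        .collect()
}
```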
AUDIT PROCESSES
Regular audits across code, dependencies, security, and documentation. cargo audit for dependency vulnerabilities. CSS and JS audits for unused styles and dead code. Architecture document version tracking ensures no spec drift between docs and running code.
COMMUNITY FEEDBACK
Public changelogs, transparent gap acknowledgments, and open architecture documentation. Community contributors can trace any user-facing feature back to its architecture spec. Bug reports and feature requests are tracked publicly and addressed on a sprint cadence.
QUALITY METRICS DASHBOARD
Key quality indicators tracked over time: test count, test pass rate, dead-code ratio, page staleness score, model coherence averages, and deployment frequency. Quality ratchets — metrics are allowed to improve but never regress without explicit justification.
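The ratchet rule is a one-line check per metric, with the direction of "better" depending on the metric (test count should rise, dead-code ratio should fall). A minimal sketch:

```rust
/// Quality ratchet: a metric may improve or hold, never regress.
/// `higher_is_better` selects the direction, e.g. true for test count,
/// false for dead-code ratio or page staleness score.
fn ratchet_ok(previous: f64, current: f64, higher_is_better: bool) -> bool {
    if higher_is_better {
        current >= previous
    } else {
        current <= previous
    }
}
```

A regression that fails this check is not automatically rejected; per the policy it requires explicit written justification instead.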
BUILT TO LAST
Quality isn’t a destination — it’s an engineering discipline applied to every line, every model, every deployment. Explore the systems that hold VALINA to this standard.