System Architecture
Verda is built to handle millions of events daily across unreliable networks, resource-constrained devices, and multiple regulatory jurisdictions. Here's how we do it.
Design Philosophy
Offline-First
Most of the world's farms don't have reliable internet. Our mobile apps work completely offline and sync when connectivity returns.
Event-Sourced
Every state change is an immutable event. Complete audit trail by design. Rebuild any point-in-time state from the event log.
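The rebuild-from-log idea can be sketched in a few lines. Event kinds and payload fields here are hypothetical, not Verda's actual schema; the point is the fold over an ordered, immutable log, with an optional cutoff for point-in-time state.

```python
from dataclasses import dataclass

# Hypothetical event shape; the real schema is richer.
@dataclass(frozen=True)
class Event:
    seq: int       # position in the log
    kind: str      # e.g. "registered", "temp_reading", "shipped"
    payload: dict

def replay(events, upto=None):
    """Rebuild batch state by folding over the immutable event log.

    Passing `upto` reconstructs the state at any earlier point in
    time, which is what gives an event-sourced system its audit trail.
    """
    state = {"status": "unknown", "readings": []}
    for e in sorted(events, key=lambda e: e.seq):
        if upto is not None and e.seq > upto:
            break
        if e.kind == "registered":
            state["status"] = "registered"
        elif e.kind == "temp_reading":
            state["readings"].append(e.payload["celsius"])
        elif e.kind == "shipped":
            state["status"] = "in_transit"
    return state
```

Because events are never mutated, any historical state is just a replay with a different cutoff.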
Eventually Consistent
Strong consistency doesn't scale globally. We embrace eventual consistency with clear conflict resolution rules.
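One such rule is last-writer-wins with a deterministic tiebreaker, sketched below under the assumption that each synced field carries a Lamport timestamp and a device id (the field/version shapes are illustrative, not Verda's wire format):

```python
def merge_replicas(a, b):
    """Merge two offline replicas of the same record.

    Each field maps to (value, lamport_timestamp, device_id).
    Last-writer-wins, with the device id breaking timestamp ties so
    the merge is commutative: sync order never changes the result.
    """
    merged = dict(a)
    for field, version in b.items():
        if field not in merged or (version[1], version[2]) > (merged[field][1], merged[field][2]):
            merged[field] = version
    return merged
```

Commutativity is the property that matters: two devices that sync in opposite orders still converge to the same record.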
Defense in Depth
Cryptographic signing at every layer: device, gateway, service, blockchain.
If one layer is compromised, the others still protect data integrity.
System Layers
Layer 1: Edge Devices
IoT sensors (temperature, humidity, GPS), NFC readers, mobile apps. All data cryptographically signed at the edge before transmission.
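The canonicalize-sign-attach flow at the edge looks roughly like this. For a self-contained sketch we stand in HMAC-SHA256 for the Ed25519 signatures the devices actually use; the key names and reading fields are hypothetical, but the pattern (deterministic serialization, sign, verify before trusting) is the same either way.

```python
import hashlib
import hmac
import json

def canonical(reading: dict) -> bytes:
    # Deterministic serialization so signer and verifier hash identical bytes.
    return json.dumps(reading, sort_keys=True, separators=(",", ":")).encode()

def sign_reading(reading: dict, device_key: bytes) -> dict:
    # HMAC-SHA256 stands in for the Ed25519 device signature here.
    sig = hmac.new(device_key, canonical(reading), hashlib.sha256).hexdigest()
    return {**reading, "sig": sig}

def verify_reading(signed: dict, device_key: bytes) -> bool:
    body = {k: v for k, v in signed.items() if k != "sig"}
    expected = hmac.new(device_key, canonical(body), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["sig"])
```

Any mutation after signing, by a gateway or anyone else, makes verification fail downstream.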
Layer 2: Gateway & Ingestion
Regional gateways aggregate sensor data. Mobile apps sync through edge APIs. Protocol translation, deduplication, initial validation.
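Deduplication matters because offline devices retransmit on reconnect. A bounded LRU set of event ids is one simple scheme (the id format and capacity here are illustrative, not the production design):

```python
from collections import OrderedDict

class Deduplicator:
    """Drop events already seen, remembering the most recent ids.

    Bounded so memory stays flat no matter how long the gateway runs;
    a duplicate arriving after eviction would pass through, which the
    downstream exactly-once pipeline still tolerates.
    """
    def __init__(self, capacity=100_000):
        self.capacity = capacity
        self.seen = OrderedDict()

    def admit(self, event_id: str) -> bool:
        if event_id in self.seen:
            self.seen.move_to_end(event_id)
            return False                      # duplicate: drop
        self.seen[event_id] = None
        if len(self.seen) > self.capacity:
            self.seen.popitem(last=False)     # evict oldest id
        return True                           # first sighting: forward
```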
Layer 3: Event Processing
Kafka streams for real-time event processing. Complex event processing for anomaly detection, alerts, and derived events.
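A temperature-excursion check, one of the simpler anomalies, can be scored with a rolling window over the stream. The band and window size below are illustrative placeholders, not our production thresholds:

```python
from collections import deque

class ExcursionDetector:
    """Flag a cold-chain excursion when the rolling mean over `window`
    readings leaves the [lo, hi] band (values in Celsius)."""
    def __init__(self, lo=0.0, hi=8.0, window=5):
        self.lo, self.hi = lo, hi
        self.readings = deque(maxlen=window)

    def observe(self, celsius: float) -> bool:
        self.readings.append(celsius)
        if len(self.readings) < self.readings.maxlen:
            return False                      # not enough data yet
        mean = sum(self.readings) / len(self.readings)
        return not (self.lo <= mean <= self.hi)
```

Averaging over a window rather than alerting on single readings keeps one noisy sensor sample from paging anyone.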
Layer 4: Business Logic
Domain services: Product Registry, Journey Composer, Certification Engine, Recall Manager, Analytics Engine. Microservices with clear bounded contexts.
Layer 5: Data Persistence
PostgreSQL for relational data. TimescaleDB for time-series IoT readings. IPFS for documents and images. Redis for caching.
Layer 6: Blockchain Anchoring
Batch anchoring to the blockchain every 60 seconds. Merkle trees for efficient verification. Rust programs for on-chain logic.
Layer 7: Consumer Interfaces
QR viewer web app, brand dashboards, regulatory portals, public API. Optimized for low-bandwidth, high-latency connections.
Event Journey
From Sensor Reading to Blockchain
1. Sensor signs the reading with its device key (Ed25519)
2. Gateway validates the signature and adds its own
3. Event is published to Kafka with exactly-once semantics
4. Service validates, enriches, and stores it in TimescaleDB
5. Batcher collects events and builds a Merkle tree
6. Root hash is anchored to the blockchain every 60 seconds
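The batching and proof steps rest on a standard Merkle construction: only the root goes on-chain, yet any single event can later be proven against it with a logarithmic number of sibling hashes. A minimal sketch (this version promotes an unpaired odd node rather than duplicating it, one of several common conventions, and is not our exact on-chain layout):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level):
    nxt = [h(level[i] + level[i + 1]) for i in range(0, len(level) - 1, 2)]
    if len(level) % 2:
        nxt.append(level[-1])   # promote the unpaired node
    return nxt

def merkle_root(leaves):
    """Root hash for a batch of raw event bytes."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes proving leaves[index] is under the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        sib = index ^ 1
        if sib < len(level):
            proof.append((level[sib], sib < index))  # (hash, sibling-is-left)
        level = _next_level(level)
        index //= 2
    return proof

def verify_proof(leaf, proof, root):
    node = h(leaf)
    for sib, is_left in proof:
        node = h(sib + node) if is_left else h(node + sib)
    return node == root
```

A verifier holding only the 32-byte anchored root can check any event with the proof alone, which is why anchoring a batch costs the same as anchoring one event.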
Core Services
Product Registry
Master data for products, batches, and supply chain actors. Hierarchical ownership model. Multi-tenant by design.
Journey Composer
Assembles product journeys from discrete events. Handles batch splitting, merging, and transformation events.
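Composition amounts to walking the lineage backwards from a batch through whatever split, merge, or transformation events produced it. The event shape below (each event lists input and output batch ids plus a timestamp) is a hypothetical simplification of the real model:

```python
def compose_journey(events, batch_id):
    """Assemble one batch's journey, oldest event first.

    Events are dicts with "kind", "ts", "inputs", "outputs" keys;
    splits have one input and many outputs, merges the reverse.
    """
    by_output = {}
    for e in events:
        for out in e["outputs"]:
            by_output.setdefault(out, []).append(e)

    journey, frontier, seen = [], [batch_id], set()
    while frontier:
        b = frontier.pop()
        for e in by_output.get(b, []):
            if id(e) in seen:
                continue
            seen.add(id(e))
            journey.append(e)
            frontier.extend(e["inputs"])   # recurse into ancestor batches
    return sorted(journey, key=lambda e: e["ts"])
```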
Anomaly Detector
ML models for detecting temperature excursions, unusual patterns, potential fraud. Real-time scoring on event streams.
Recall Manager
Forward and backward tracing. Notification orchestration. Recall execution tracking and compliance reporting.
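Both tracing directions are graph traversals over batch lineage; they differ only in which way the edges point. A sketch over hypothetical (parent, child) links:

```python
from collections import deque

def trace(links, start, direction="forward"):
    """BFS over batch-lineage edges, given as (parent, child) pairs.

    forward: everything derived from `start` (what to pull from shelves);
    backward: everything `start` was derived from (where contamination
    may have entered the chain).
    """
    adj = {}
    for parent, child in links:
        src, dst = (parent, child) if direction == "forward" else (child, parent)
        adj.setdefault(src, set()).add(dst)

    reached, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt not in reached:
                reached.add(nxt)
                queue.append(nxt)
    return reached
```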
API Gateway
Rate limiting, authentication, request routing. GraphQL for dashboards, REST for integrations.
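Per-client rate limiting is typically a token bucket; a minimal version is below. The rate and burst numbers are illustrative, not our gateway's actual limits:

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter of the kind a gateway keeps per
    client key: tokens refill continuously at `rate` per second, up
    to `burst`, and each admitted request spends one token."""
    def __init__(self, rate=100.0, burst=200.0):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```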
Anchor Service
Manages blockchain interactions. Batch building, transaction submission, confirmation tracking, proof generation.
Deployment
Multi-Region
Primary region in Singapore (AWS ap-southeast-1). Secondary in Jakarta for Indonesian regulatory compliance. Disaster recovery in Sydney.
Kubernetes
All services run on EKS. Horizontal pod autoscaling based on Kafka lag. Blue-green deployments with automatic rollback.
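Scaling on Kafka lag rather than CPU means replicas track the actual backlog. One common way to express that is a KEDA ScaledObject; the sketch below uses placeholder service, broker, and topic names, not our real manifests:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: journey-composer-scaler   # placeholder service name
spec:
  scaleTargetRef:
    name: journey-composer        # deployment to scale
  minReplicaCount: 2
  maxReplicaCount: 20
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka:9092   # placeholder broker address
        consumerGroup: journey-composer
        topic: supply-chain-events     # placeholder topic
        lagThreshold: "1000"           # scale out once lag exceeds this
```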
Observability
OpenTelemetry for distributed tracing. Prometheus + Grafana for metrics. Loki for log aggregation. PagerDuty for alerting.
Performance
- Events/day capacity
- API p99 latency
- Uptime SLA