Engineering biological adaptability into deep learning architectures for non-stationary financial environments.
The fundamental failure mode of quantitative machine learning is the assumption of stationarity. Traditional models are trained on historical datasets, extracting patterns optimised for past regimes.
When markets undergo regime shifts from low-volatility trending environments to high-entropy chop, static models systematically fail. Retraining weights is insufficient. To survive, the model architecture itself must evolve.
Deployed models typically degrade within weeks of a regime shift.
Modern markets shift rapidly between bull and bear regimes, outpacing manual retraining cycles.
Algorithmic trading is growing at an 11.2% CAGR, yet no adaptive solutions exist at scale.
A 2-layer GRU (112 hidden units) trained on CUDA-accelerated PyTorch, hunting directional alpha over a 4-hour prediction horizon. Raw directional accuracy: 56.1%, versus a 52.1% ARIMA baseline.
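A minimal sketch of the forecasting core, assuming a binary up/down head and an illustrative feature count; the GRU depth and width (2 layers, 112 hidden units) come from the spec above, everything else (`n_features`, head layout) is an assumption, not the production model.

```python
import torch
import torch.nn as nn

class DirectionalGRU(nn.Module):
    """2-layer GRU, 112 hidden units, emitting a directional logit per sequence."""

    def __init__(self, n_features: int = 16, hidden: int = 112, layers: int = 2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # logit: P(price up over next 4h)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features) of engineered market features
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])  # read off the final timestep's state

model = DirectionalGRU()
logits = model(torch.randn(8, 64, 16))  # 8 sequences of 64 bars each
print(logits.shape)  # → torch.Size([8, 1])
```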
Dual-layer meta-labeling: an XGBoost classifier analyses market microstructure (spread, funding, volatility) to predict trade success probability. A deterministic ADX/RSI circuit breaker acts as a non-hallucinating safety floor.
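The dual-layer gate can be sketched as follows. The trained XGBoost classifier is stood in for by its output, `p_success` (predicted trade-success probability); all thresholds here are illustrative assumptions, not the production configuration.

```python
def circuit_breaker(adx: float, rsi: float) -> bool:
    """Deterministic safety floor: veto trades in trendless or overextended markets."""
    if adx < 20.0:                # ADX below ~20: no discernible trend (chop)
        return False
    if rsi < 25.0 or rsi > 75.0:  # RSI extremes: avoid chasing exhaustion moves
        return False
    return True

def allow_trade(p_success: float, adx: float, rsi: float,
                threshold: float = 0.6) -> bool:
    # Layer 1: meta-label probability gate (XGBoost output).
    # Layer 2: non-hallucinating rule-based breaker — both must pass.
    return p_success >= threshold and circuit_breaker(adx, rsi)
```

Because the second layer is pure arithmetic on indicator values, it cannot be talked out of a veto by a confident but wrong model, which is the point of a hard safety floor.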
Spawns 50 architecture mutants every 24 hours. Each is simulated against a rolling 30-day window, evaluated by Sharpe ratio, drawdown, and accuracy. The fittest topology is automatically deployed.
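The nightly cycle above can be sketched as a mutate-score-select loop. The mutation operator, fitness weights, and the `simulate` backtest interface are assumptions for illustration; only the mutant count and the Sharpe/drawdown/accuracy criteria come from the spec.

```python
import random

def mutate(topology: dict) -> dict:
    """Perturb one candidate architecture (illustrative operator)."""
    child = dict(topology)
    child["hidden"] = max(16, topology["hidden"] + random.choice([-16, 0, 16]))
    child["layers"] = min(4, max(1, topology["layers"] + random.choice([-1, 0, 1])))
    return child

def fitness(sharpe: float, max_drawdown: float, accuracy: float) -> float:
    # Weighted blend rewarding risk-adjusted return; weights are assumptions.
    return sharpe - 2.0 * max_drawdown + accuracy

def evolve(current: dict, simulate, n_mutants: int = 50) -> dict:
    """simulate(topology) -> (sharpe, max_drawdown, accuracy) on a rolling 30-day window."""
    population = [current] + [mutate(current) for _ in range(n_mutants)]
    return max(population, key=lambda t: fitness(*simulate(t)))
```

Including the incumbent topology in the population means a deployment only happens when some mutant actually beats it on the current window.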
The Governor architecture filters low-probability signals in high-entropy regimes, executing only on asymmetric setups.
The deterministic circuit breaker ensures hard safety floors. Peak-to-trough drawdown never exceeded 12% across all 4 major regime shifts.
GPU-parallel evolutionary search ensures the active model is always adapted to the current 30-day volatility window.
Compiled via NVIDIA TensorRT for ultra-low latency execution in live production environments.
Self-taught machine learning researcher specialising in non-stationary time-series and financial regime detection. At 17, I began investigating why deployed ML models systematically fail when market conditions shift and spent the next three years building a solution from scratch.
That research became Proteus, an adaptive deep learning architecture that rewrites its own structural topology every 24 hours to survive regime shifts, now in its tenth iteration. I incorporated Latent Research Ltd in London in 2026 to commercialise this research.
Currently in pre-deployment validation phase. Seeking infrastructure partnerships to scale live testing.
Securing GPU credits via AWS Activate, Google for Startups, and NVIDIA Inception to resume live and paper trading. Infrastructure access is the critical path before deployment.
Third-party verification of the V10 live track record. Simultaneous deployment of the Intelligence API to 3–5 pilot family office partners for integration testing.
Full public release of the Intelligence API. Transition from stealth R&D to revenue generation. Concurrent development of multi-asset V11 architecture.
Parallel adaptive engines across 20+ crypto pairs. Cross-asset correlation engine. Portfolio orchestrator with Kelly Criterion position sizing. 1,000 mutants per 24h evolutionary cycle.
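The Kelly-criterion sizing named in the V11 roadmap can be sketched as below; the fractional-Kelly safety multiplier is a common practical adjustment and an assumption here, not a stated part of the design.

```python
def kelly_fraction(p_win: float, win_loss_ratio: float) -> float:
    """Full-Kelly bet fraction: f* = p - (1 - p) / b, where b is the win/loss payoff ratio."""
    return p_win - (1.0 - p_win) / win_loss_ratio

def position_size(equity: float, p_win: float, b: float, safety: float = 0.5) -> float:
    # Clamp negative edges to zero (no trade) and scale by a fractional-Kelly
    # safety factor, since full Kelly is notoriously volatile in practice.
    f = max(0.0, kelly_fraction(p_win, b)) * safety
    return equity * f
```

For example, a 60% win probability at 2:1 payoff gives a full-Kelly fraction of 0.4, so half-Kelly stakes 20% of equity.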
We are currently accepting infrastructure credit partnerships and evaluating a limited cohort of pilot API clients for Q3 2026.