SEEKING SEED INVESTMENT

Today's servers see
AI agents as attacks.
We see them as the future.

An HTTP server written from scratch in Rust. Zero heap allocations on the hot path. Built for a web where bots already outnumber humans.

0
HEAP ALLOCATIONS
per request on hot path
205K
REQ/SEC SERVING
wrk benchmark, small files
<1ms
P99 TAIL LATENCY
at 80% max load
51%
WEB = BOTS
of all internet traffic (Imperva 2025)
01 — The Problem

Web servers weren't designed for 39,000 requests per minute from a single client.

An AI agent doing its job can fire tens of thousands of HTTP requests per minute. To nginx, that looks like a DDoS. The connection gets rate-limited or dropped.

Existing servers allocate memory on every request, box async futures onto the heap, and set rate limits around human browsing speeds. That made sense ten years ago. It doesn't anymore.

AI agent traffic grew 1,300% between January and August 2025. The servers haven't kept up.

WEB TRAFFIC COMPOSITION — 2025
Automated / Bot Traffic 51%
Human Browser Traffic 49%
87%
of agent page views are product pages (HUMAN Security)
39K
req/min from a single fetcher bot (Fastly Threat Insights)
1,300%
AI agent traffic growth, Jan–Aug 2025 (HUMAN Security)
42%
of heap from per-connection I/O buffers (hyper #1790)

Every request. Every time.
Allocate, copy, free.

What happens inside a typical HTTP server on every request. Even "hello world" pays this tax.

typical_rust_server.rs — hyper + tokio + tower
// Every request triggers this cascade:
BytesMut::reserve()     ← heap alloc for read buffer
String::from(method)    ← heap alloc for method
Uri::from_parts()       ← heap alloc for URI
HeaderMap::new()        ← heap alloc for headers
Box::pin(future)        ← heap alloc per task
Arc::new(body)          ← heap alloc + atomic refcount
tower::Layer::call()    ← heap alloc per middleware

// Result at 10K concurrent connections:
// Memory: 3MB → 1.1GB (actix-web #1946)
// Latency: p50 ok, p99.9 = 💀
// Agents: mistakenly rate-limited
synapserve.rs — zero-allocation architecture
// Same request. Zero allocations:
Span { off: u16, len: u16 }  ← 4 bytes, stack, Copy
parse(buf, &mut req)        ← stack-allocated output
[Header; 64]               ← stack array, 640 bytes
io_uring provided buffers  ← kernel picks from pool
io_uring SPLICE            ← zero-copy to socket
thread-per-core            ← no Arc, no mutex, no Send
proxy_pass → upstream pool ← keepalive reuse, no alloc

// Result at 10K concurrent connections:
// Memory: flat ±5% over 30 minutes
// Latency: p99.9 < 1ms
// Proxy: load balanced, auto-failover
02 — The Solution

Not a fork. Not a wrapper.
A new server
for agent traffic.

We didn't patch nginx or wrap tokio. We wrote an HTTP server from scratch, assuming most clients are software, not people.

Zero-Allocation Hot Path

Parsed requests live on the stack. Spans instead of string copies. The kernel manages I/O buffers directly via io_uring. No allocator pressure, no latency spikes.

0 alloc/req verified by counting allocator

Agent-Native Protocols

Architecture designed for IETF Web Bot Auth (Signature-Agent headers), MCP Streamable HTTP, Google A2A discovery, and SSE token streaming. Agents are the primary design target, not an afterthought.

SSE planned Q1 2026, MCP + A2A planned Q2 2026

Predictable Tail Latency

Thread-per-core with io_uring. No Arc, no Mutex, no Send bounds. Each core owns its connections and buffers. p99.9 stays flat because there's nothing shared to contend over.

<1ms p99.9 at 80% max throughput
03 — Technology

Rust. io_uring. Zero-copy.
Here's how it works.

SYNAPSERVE ARCHITECTURE
incoming request (AI agent, 39K req/min)
KERNEL
io_uring multishot accept
kernel → provided buffer ring
1 syscall → N conns · zero-copy recv
SECURITY
TLS 1.3 handshake → kTLS
rustls handshake in userspace
kernel-mode encryption via kTLS
TLS 1.3 early data support
handshake once · kernel encrypts · HTTP + HTTPS
USERSPACE
synapserve-http-parser
Span{off:u16, len:u16} on stack
[Header; 64] stack array
SIMD-accelerated scanning
0 alloc · 4-byte spans · 640 B total · AVX2 / SSE4.2 / NEON
handler(&req, &mut writer)
Static files / reverse proxy routing
Agent identification
Adaptive backpressure
direct buffer write · Signature-Agent · L = λW
reverse proxy & upstream pool
keepalive connection reuse
weighted load balancing
health tracking & failover
0 locks · 3 algorithms · auto retry · splice relay
KERNEL
io_uring SEND_ZC + SPLICE
static files & upstream relay
kernel → socket · 0 userspace copies
response — zero-copy, sub-millisecond
Rust

Memory safety without a GC. The borrow checker enforces zero-copy at compile time. No runtime, no VM.

io_uring

Linux 6.1+ required. macOS/Windows support planned post-1.0. Multishot accept, provided buffer rings, zero-copy send. One submission queue per core — minimal syscall overhead.

Thread-per-core

Each core owns its connections, buffers, and allocator. No locks, no contention. Scales linearly with cores.

SIMD parsing

AVX2/SSE4.2/NEON-accelerated header scanning with runtime feature detection. 16-32 bytes checked per compare instruction. Combined with span output, parse results never leave registers.

kTLS

TLS 1.3 via rustls, then kernel takes over encryption. SEND_ZC and SPLICE remain zero-copy through the TLS layer.

What we're replacing, and why.

NGINX

Industry standard for 20 years. Written in C with manual memory management. 6 memory-corruption CVEs in 2024 including use-after-free (CVE-2024-24990) and buffer overwrite (CVE-2024-32760). No native agent protocol support.

HYPER / AXUM

Good libraries on tokio's async runtime. Both use httparse for HTTP/1.x parsing (Axum transitively via Hyper). Pin<Box<dyn Future>> per connection (tower #753). 42% of heap traced to per-connection I/O buffers (hyper #1790).

ACTIX-WEB

Battle-tested framework, also built on httparse (via actix-http). Memory grows to ~1.1GB under 16K connections and never returns (#1946, #1780). Actor model adds overhead for simple request-response patterns.

All three Rust frameworks share the same HTTP/1.x parser: httparse. Our parser is faster — see the head-to-head benchmarks.

One YAML file.
TLS, proxy, vhosts, done.

TLS termination, virtual hosts, reverse proxy with load balancing, SPA mode, pre-compressed assets. One config file, no modules to install.

synapserved.yaml
server:
  listen_https: "0.0.0.0:443"
  listen_http: "0.0.0.0:80"       # HTTP + HTTPS on one instance
  workers: auto
  tls:
    cert: /etc/ssl/certs/fullchain.pem
    key: /etc/ssl/private/privkey.pem

upstreams:
  api_backend:
    balancer: least_conn
    servers:
      - addr: "10.0.1.10:3000"
        weight: 5
      - addr: "10.0.1.11:3000"
        weight: 5
      - addr: "10.0.1.12:3000"
        backup: true             # failover only
    keepalive: 64
    next_upstream: [error, timeout, http_502]

hosts:
  api.example.com:              # reverse proxy to backend pool
    locations:
      "/v1/":
        proxy_pass: api_backend
        proxy_set_header:
          X-Real-IP: "$remote_addr"

  app.example.com:              # static files + SPA
    root: /var/www/app
    spa: true                     # 404 → index.html
    compression:
      enabled: true
      precompressed: [br, gzip]
04 — Benchmarks

Numbers talk.
Same hardware, same files, same tool.

STATIC FILE THROUGHPUT — wrk, 256 connections, 60s, 8 workers

small.json (118 bytes)
  synapserve   205,682 req/s   p50 554µs
  nginx        114,902 req/s   p50 1.78ms
  caddy         43,730 req/s   p50 4.74ms

medium.json (4,005 bytes)
  synapserve   178,541 req/s   p50 644µs
  nginx        109,958 req/s   p50 1.88ms
  caddy         38,398 req/s   p50 5.25ms

large.json (24,203 bytes)
  synapserve   104,362 req/s   p50 1.13ms
  nginx         93,530 req/s   p50 1.81ms
  caddy         36,774 req/s   p50 5.29ms
+79%

faster than nginx on small files. 205K vs 115K req/s.

+62%

faster than nginx on medium files. 179K vs 110K req/s.

554µs

median latency. 3.2x faster than nginx p50 of 1.78ms.

14.5MB

RSS under load. 8 workers, 256 connections, zero-allocation serving.

TEST ENVIRONMENT
HARDWARE

Intel Core i7-8550U @ 1.80GHz
4 cores / 8 threads
8MB L3 cache
Linux 6.17.0-14-generic (Ubuntu)

METHODOLOGY

wrk -t4 -c256 -d60s --latency
10s warmup per server per file size
8 workers, ETag & compression disabled
All servers localhost, same data directory

05 — Market Opportunity

The web server market hasn't changed since nginx.

F5 paid $670M for nginx in 2019 — and that was for the human web. Now every cloud provider, every AI startup, every enterprise running agents needs HTTP infrastructure that actually understands agent traffic. The API gateway market alone hits $11B by 2030.

API Gateway for Agent Traffic

Drop-in replacement for nginx/Envoy at the edge. Reverse proxy with keepalive pooling, load balancing, and automatic failover.

LLM Inference Serving

Zero-copy SSE streaming for token delivery. SynapServe targets the HTTP transport layer for inference APIs — not inference compute itself.

Agent-to-Agent Infrastructure

Agents talking to agents over MCP and A2A. The server should help, not block them.

What stops someone else from building this?

  • None of the top web servers use io_uring for network I/O. Nginx (32.7% market share), Cloudflare/Pingora (26.6%), and Apache (24.2%) all run on epoll. Envoy (1.5%) has opt-in io_uring but it's off by default. LiteSpeed and H2O use it for disk I/O only. SynapServe is built on io_uring end-to-end: accept, read, parse, send, splice. (W3Techs, Feb 2026)
  • kTLS is rare and always opt-in. Only 3 of 8 major servers support kernel TLS at all: nginx (opt-in since 2021, requires custom OpenSSL build), H2O (opt-in), HAProxy (experimental since Nov 2025). Apache has an architectural blocker in mod_ssl. Envoy and Caddy have open but stalled requests. Pingora can't — BoringSSL doesn't support kTLS. SynapServe uses rustls for the handshake, then offloads to kTLS — zero-copy SEND_ZC and SPLICE stay intact through the TLS layer.
  • Agent protocol standards (MCP, A2A, Web Bot Auth) are moving targets; shipping native support while specs evolve creates compounding first-mover advantage
  • Open-core community lock-in: once teams build config and operational knowledge around a server, switching costs are real (the nginx playbook — F5 paid $670M for software most users ran for free)
  • Linux-only via io_uring is a constraint but also a moat — the servers handling serious agent traffic run Linux
ROADMAP
COMPLETED
Zero-allocation HTTP/1.1 parser

Span-based parsing, SIMD scanning, chunked decoding. Benchmarked at target throughput.

COMPLETED
io_uring I/O layer

Multishot accept, provided buffer rings, zero-copy send, connection slab allocator.

COMPLETED
HTTP/1.1 server framework

Handler trait, radix-tree router, virtual hosts, static file serving with ETag/Range/Brotli.

COMPLETED
Reverse proxy & upstream load balancing

Keepalive connection pooling, weighted round-robin / least-conn / IP hash, peer health tracking, automatic retry with next-upstream failover, backup servers, zero-copy splice relay, DNS re-resolution.

COMPLETED
TLS 1.3 with kernel TLS (kTLS)

rustls handshake with kernel-mode encryption offload. Parallel HTTP + HTTPS listeners. TLS 1.3 early data support.

Q1 2026
SSE streaming

Native SSE for LLM token streaming. Zero-copy event dispatch.

Q2 2026
Agent protocols (MCP, A2A, Web Bot Auth)

Native MCP Streamable HTTP, Google A2A discovery, IETF Signature-Agent verification.

Q3 2026
HTTP/2

Full HTTP/2 with bounded flow control and stream multiplexing.

Q4 2026
HTTP/3 (QUIC) + managed cloud offering

Full QUIC support via s2n-quic. SynapServe Cloud: managed agent gateway as a service.

Why now

Three things changed
at the same time.

1

AI agents are the new browsers

51% of web traffic is now non-human. OpenAI, Anthropic, Google, Meta are all shipping agents that make HTTP calls. The shift already happened.

2

Protocols are being standardized

IETF Web Bot Auth, Anthropic's MCP, Google's A2A. The standards for agent-to-server communication are being written right now. Early support matters.

3

io_uring has matured

Linux 6.1+ finally has the primitives: multishot accept, provided buffer rings, zero-copy send. You couldn't build this server two years ago.

06 — Business Model

The server is free.
Running it at scale is not.

The core ships under an open-source license. Anyone can download it, deploy it, build on top of it. 80% of enterprise IT leaders are increasing open-source adoption — nobody pays for an HTTP server they haven't run in production first.

Money comes when teams go from “one engineer testing it” to “running it across the fleet.” That's when they need health checks, access control, dashboards, cloud integrations, and someone to call at 2 AM. That's the paid layer. Even at 1–5% conversion rates, open-core infrastructure companies have built multi-billion dollar businesses.

OPEN SOURCE

SynapServe Core

Free

The full server. Not a demo, not a crippled version — the real thing.

  • HTTP/1.1, HTTP/2, HTTP/3
  • Zero-allocation parser & io_uring I/O
  • TLS 1.3 with kTLS
  • Static files, reverse proxy, load balancing
  • YAML config, single-node
  • Community support
This is where every customer starts. Downloads, GitHub stars, community — the top of the funnel.
SELF-HOSTED, PAID

SynapServe Enterprise

Per-instance / yr

Everything in Core, plus the stuff ops teams won't run without.

  • Automatic HTTPS — certs issued, renewed, and redirected without config
  • Service auto-discovery — watches Docker & K8s APIs, routes update on deploy
  • Active health checks & zero-downtime reload
  • Agent auth & signature verification
  • SSO, OIDC, LDAP & role-based access
  • Audit logs & compliance
  • Multi-node management dashboard
  • Kubernetes Ingress Controller & Helm operator
  • AWS, GCP, Azure integration
  • Priority support
This is where the money is. Annual subscriptions, per instance. The same model that built $670M and $6.4B exits in open-source infrastructure.
MANAGED, PAID

SynapServe Cloud

Per-request

We run it. You send traffic. Pay for what you use.

  • Everything in Enterprise
  • Fully managed edge deployment
  • Agent traffic analytics & observability
  • Multi-region, auto-scaling
Revenue grows with agent traffic. As customers scale, we scale with them.

This model has exits.

Open-core infrastructure companies get acquired at 10–26× revenue — well above the 3.7× software median. Once teams build config, tooling, and operational knowledge around a server, they don't leave. The best open-core companies run 130–140% net dollar retention.

Where the money is

We start in API gateways and Kubernetes ingress — 93% of organizations already use or evaluate K8s. As agent traffic grows, we expand into cloud-native infrastructure and AI inference serving.

API Gateway
20.8% CAGR
$4.3B
$20.2B
2024 → 2033 • Global Growth Insights
K8s Ingress Controller
20.1% CAGR
$1.1B
$5.7B
2024 → 2033 • GrowthMarketReports
Cloud-Native Infrastructure
29.4% CAGR
$11.3B
$41.1B
2025 → 2030 • Mordor Intelligence
AI Inference Gateway
14.3% CAGR
$3.9B
$9.8B
2024 → 2031 • Market.us
07 — Founder

8 years in fintech. Every system I built
hit the same wall under load.

Evgeniy Sukhanov

Technical Solo Founder

20 years in IT, the last 8 in fintech — trading platforms, payment gateways, the kind of systems that page you at 3 AM. Every stack I worked on hit the same allocation wall under load. At some point I stopped patching around it and wrote a server that doesn't have the problem.

First hires with the seed round: two senior Rust engineers to accelerate HTTP/2 and agent protocol work. Actively seeking a co-founder with go-to-market experience in infrastructure or developer tools.

BACKGROUND
20 years in IT

Last 8 in fintech infrastructure

WHAT I BUILT
Trading platforms

Plus payment gateways and everything around them

WHY THIS
Same wall

Every stack hit the same allocation bottleneck

08 — Investment

F5 paid $670M for nginx.
That was for the human web.

The agent web needs different plumbing. We're raising $1.5M to ship HTTP/2-3, add agent protocol support, and land first production deployments.

RAISING
$1.5M

Seed round to fund 18 months of development and first production deployments

STAGE
Working
Product

HTTP/1.1 server, TLS 1.3 with kTLS, reverse proxy with load balancing, and io_uring I/O layer — built and benchmarked

USE OF FUNDS
Ship
& Scale

HTTP/2-3, agent protocols, cloud offering, and first enterprise partnerships

Happy to talk. We reply fast.