In my self-hosted infrastructure post, I mentioned Cloudflare sitting in front of my services. What I didn’t cover was how much work Cloudflare actually does beyond basic proxying — and how a zero trust model changed the way I think about network security entirely.
This post goes deeper into DNS management, Cloudflare’s security stack, and the zero trust architecture that ties everything together.
DNS Fundamentals (That People Get Wrong) #
Before diving into Cloudflare specifics, it’s worth revisiting how DNS actually works — because most developers treat it like magic and then get surprised when things break.
DNS is a distributed, hierarchical database. When someone visits app.example.com, the query travels through multiple layers: root nameservers, TLD nameservers (.com), authoritative nameservers (yours or your provider’s), and finally returns an IP address. Each layer caches the result based on TTL (time-to-live) values.
The most common mistake I see is setting TTLs too high during development. If you set a TTL of 86400 (24 hours) and then need to change a DNS record, you’re waiting up to a full day for the change to propagate globally. During active development or migration, I keep TTLs at 300 seconds (5 minutes). Once everything is stable, I increase them.
# Check current DNS resolution and TTL
dig app.example.com +short
dig app.example.com +noall +answer
# Check propagation across different nameservers
dig @8.8.8.8 app.example.com # Google DNS
dig @1.1.1.1 app.example.com # Cloudflare DNS
dig @9.9.9.9 app.example.com # Quad9

The other common mistake: not understanding the difference between record types. A records point to IPv4 addresses, AAAA to IPv6, CNAME is an alias to another domain (but can’t be used at the zone apex), and MX handles email routing. Cloudflare adds proxy functionality on top — when a record is "proxied" (orange cloud), traffic goes through Cloudflare’s network instead of directly to your server.
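The TTL caching behavior described above is easy to model. Here is a minimal sketch of a resolver-style cache (hypothetical names, for illustration only): entries are served until their TTL expires, after which the next lookup has to go upstream again — which is exactly why a stale record with an 86400-second TTL can linger for a day.

```python
import time

class DnsCache:
    """Toy resolver cache: serves entries until their TTL expires."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock   # injectable clock, handy for testing
        self._entries = {}    # name -> (ip, expires_at)

    def put(self, name, ip, ttl):
        self._entries[name] = (ip, self._clock() + ttl)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None       # cache miss: would query upstream
        ip, expires_at = entry
        if self._clock() >= expires_at:
            del self._entries[name]
            return None       # TTL expired: would query upstream again
        return ip
```

With a 300-second TTL a stale answer can survive at most five minutes; at 86400 it can survive a full day, which is the whole argument for keeping TTLs low during migrations.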
Cloudflare DNS Management #
I manage all my domains through Cloudflare. The free tier alone gives you authoritative DNS with Anycast routing, DDoS protection, and a CDN. For a self-hosted setup, that’s an incredible amount of value for zero cost.
My DNS setup follows a consistent pattern across all domains:
# Zone: example.com
# Root domain and www → main web server (proxied)
example.com A → Cloudflare Proxy → Origin Server
www CNAME → example.com (proxied)
# Subdomains for services (proxied)
app A → Cloudflare Proxy → Origin Server
api A → Cloudflare Proxy → Origin Server
status A → Cloudflare Proxy → Origin Server
# Internal services (DNS-only, accessed via Tailscale)
grafana A → Internal IP (DNS only, gray cloud)
proxmox A → Internal IP (DNS only, gray cloud)
# Email (MX records, never proxied)
@ MX → mail provider
@ TXT → SPF record
_dmarc TXT → DMARC policy

The key decision for each record is whether to proxy it through Cloudflare (orange cloud) or not (gray cloud). Proxied records get DDoS protection, caching, WAF, and hide your origin IP. DNS-only records expose the real IP. The rule is simple: proxy anything public, DNS-only for internal services.
For internal services, I point DNS records to Tailscale IPs. These are only resolvable and reachable from within the Tailscale network. Even if someone discovers the DNS record, the IP is useless without being on the mesh.
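Tailscale assigns node addresses from the CGNAT range 100.64.0.0/10, which is not publicly routable — a quick check with the stdlib `ipaddress` module (a sketch, not part of my setup) shows why a leaked DNS record pointing at such an address gives an attacker nothing:

```python
import ipaddress

# Tailscale hands out node addresses from the CGNAT block 100.64.0.0/10.
TAILSCALE_RANGE = ipaddress.ip_network("100.64.0.0/10")

def is_tailscale_ip(ip: str) -> bool:
    """True if the address falls inside Tailscale's CGNAT block."""
    return ipaddress.ip_address(ip) in TAILSCALE_RANGE

# A DNS record resolving to 100.x.y.z is unreachable from outside the mesh,
# so exposing it in public DNS leaks nothing actionable.
```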
Automating DNS with Terraform #
Managing DNS records through the Cloudflare dashboard works for a few records. Once you have dozens across multiple domains, it becomes error-prone. I use Terraform with the Cloudflare provider to manage everything as code.
# providers.tf
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 4.0"
    }
  }
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

Each domain gets its own module:
# modules/example-com/main.tf
resource "cloudflare_zone" "main" {
  account_id = var.account_id
  zone       = "example.com"
}

resource "cloudflare_record" "root" {
  zone_id = cloudflare_zone.main.id
  name    = "@"
  content = var.origin_ip
  type    = "A"
  proxied = true
  ttl     = 1 # Auto TTL when proxied
}

resource "cloudflare_record" "app" {
  zone_id = cloudflare_zone.main.id
  name    = "app"
  content = var.origin_ip
  type    = "A"
  proxied = true
  ttl     = 1
}

resource "cloudflare_record" "grafana" {
  zone_id = cloudflare_zone.main.id
  name    = "grafana"
  content = var.tailscale_ip
  type    = "A"
  proxied = false # Internal only
  ttl     = 300
}

Now DNS changes go through version control. I can review changes in a PR before applying them, and if something goes wrong, I revert the commit and run terraform apply again. No more clicking around in dashboards wondering who changed what and when.
# Preview changes before applying
terraform plan
# Apply changes
terraform apply
# Oops, roll back
git revert HEAD && terraform apply

Cloudflare Tunnel: No Open Ports #
This is the feature that changed my setup the most. Cloudflare Tunnel (formerly Argo Tunnel) creates an outbound-only connection from your server to Cloudflare’s network. Traffic flows from visitors → Cloudflare → tunnel → your server. Your server never accepts inbound connections from the internet.
This means zero open ports on your firewall. No port 80, no port 443, nothing. The attack surface drops to essentially zero because there’s nothing to scan or probe.
Setting up a tunnel:
# Install cloudflared
curl -L https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64 -o /usr/local/bin/cloudflared
chmod +x /usr/local/bin/cloudflared
# Authenticate with Cloudflare
cloudflared tunnel login
# Create a tunnel
cloudflared tunnel create my-server
# This generates a credentials file at:
# ~/.cloudflared/<tunnel-id>.json

The tunnel configuration maps public hostnames to internal services:
# ~/.cloudflared/config.yml
tunnel: my-server
credentials-file: /root/.cloudflared/<tunnel-id>.json

ingress:
  - hostname: app.example.com
    service: http://localhost:8080
  - hostname: api.example.com
    service: http://localhost:3000
  - hostname: status.example.com
    service: http://localhost:9090
  # Catch-all rule (required)
  - service: http_status:404

Each hostname maps to a local service. Cloudflare handles TLS termination, so the tunnel connects to your services over plain HTTP internally. Traefik becomes optional for public services — though I still use it for internal routing and middleware.
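cloudflared evaluates ingress rules top to bottom and routes a request to the first matching hostname, falling through to the final catch-all. A rough Python model of that first-match behavior (illustrative only, not cloudflared's internals):

```python
# Mirrors the config.yml above: first hostname match wins,
# and the final rule with no hostname is the required catch-all.
INGRESS = [
    {"hostname": "app.example.com", "service": "http://localhost:8080"},
    {"hostname": "api.example.com", "service": "http://localhost:3000"},
    {"hostname": "status.example.com", "service": "http://localhost:9090"},
    {"service": "http_status:404"},  # catch-all
]

def route(hostname: str) -> str:
    for rule in INGRESS:
        # A rule without a hostname matches everything.
        if "hostname" not in rule or rule["hostname"] == hostname:
            return rule["service"]
    raise ValueError("ingress must end with a catch-all rule")
```

The catch-all requirement makes sense under this model: without it, a request for an unknown hostname would have nowhere to go.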
I run cloudflared as a systemd service so it starts automatically and reconnects on failure:
cloudflared service install
systemctl enable cloudflared
systemctl start cloudflared

The DNS records for tunneled services use CNAME pointing to <tunnel-id>.cfargotunnel.com. Terraform handles this too:
resource "cloudflare_record" "app_tunnel" {
  zone_id = cloudflare_zone.main.id
  name    = "app"
  content = "${var.tunnel_id}.cfargotunnel.com"
  type    = "CNAME"
  proxied = true
}

Zero Trust: Trust Nothing, Verify Everything #
Traditional network security works like a castle with a moat — everything outside the perimeter is untrusted, everything inside is trusted. The problem is obvious: once an attacker gets past the perimeter (phishing, compromised credentials, supply chain attack), they have access to everything.
Zero trust flips this model. No user, device, or network is inherently trusted. Every request must be authenticated and authorized, regardless of where it comes from. Even if you’re on the "internal" network, you still prove who you are before accessing anything.
My zero trust stack has three layers:
Layer 1: Cloudflare Access #
Cloudflare Access puts an authentication layer in front of any web application. Instead of exposing a login page to the internet and hoping your app’s auth is bulletproof, Cloudflare handles authentication before the request ever reaches your server.
I protect internal dashboards like this:
# Cloudflare Access application
resource "cloudflare_access_application" "grafana" {
  zone_id          = cloudflare_zone.main.id
  name             = "Grafana"
  domain           = "grafana.example.com"
  session_duration = "24h"
}

# Access policy: only allow specific emails
resource "cloudflare_access_policy" "grafana_policy" {
  application_id = cloudflare_access_application.grafana.id
  zone_id        = cloudflare_zone.main.id
  name           = "Allow admins"
  precedence     = 1
  decision       = "allow"

  include {
    email = ["admin@example.com"]
  }
}

When someone visits grafana.example.com, Cloudflare intercepts the request and presents a login screen. They can authenticate via email OTP, Google, GitHub, or any other configured identity provider. Only after successful authentication does Cloudflare forward the request to the origin. The application itself never sees unauthenticated traffic.
This is particularly powerful for services that have weak or no built-in auth. I have some older tools that only support basic HTTP auth — putting Cloudflare Access in front of them gives me proper authentication without modifying the application.
Layer 2: Tailscale for Machine-to-Machine #
While Cloudflare Access handles browser-based access, Tailscale handles machine-to-machine and SSH communication. Every device in my network has a Tailscale client installed, and they communicate over WireGuard-encrypted tunnels.
Tailscale ACLs (Access Control Lists) define who can reach what:
{
  "acls": [
    {
      "action": "accept",
      "src": ["group:admins"],
      "dst": ["tag:server:*"]
    },
    {
      "action": "accept",
      "src": ["tag:monitoring"],
      "dst": ["tag:server:9100"]
    },
    {
      "action": "accept",
      "src": ["tag:server"],
      "dst": ["tag:server:*"]
    }
  ],
  "groups": {
    "group:admins": ["user@example.com"]
  },
  "tagOwners": {
    "tag:server": ["group:admins"],
    "tag:monitoring": ["group:admins"]
  }
}

This ACL says: admins can reach any port on servers, the monitoring stack can only reach port 9100 (node_exporter) on servers, and servers can talk to each other. Everything else is denied by default.
The key insight is that these ACLs work at the network level. Even if a service has no authentication, it’s only reachable by authorized devices. Defense in depth — the network enforces what the application might not.
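The default-deny semantics are worth internalizing: a connection succeeds only if some rule explicitly accepts it. A simplified evaluator of that model (a sketch with hypothetical names, not Tailscale's actual engine):

```python
# Simplified accept-only rules, mirroring the ACL above.
ACLS = [
    {"src": "group:admins", "dst": "tag:server", "port": "*"},
    {"src": "tag:monitoring", "dst": "tag:server", "port": "9100"},
]

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: only an explicit accept rule opens a path."""
    for rule in ACLS:
        if (rule["src"] == src and rule["dst"] == dst
                and rule["port"] in ("*", str(port))):
            return True
    return False  # no rule matched -> denied
```

There is no deny rule anywhere — denial is simply the absence of a matching accept, which is what makes the model safe to reason about.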
Layer 3: Application-Level Auth #
The innermost layer is application-level authentication. Even with Cloudflare Access and Tailscale ACLs, each service still has its own auth. If Cloudflare Access were somehow bypassed (configuration error, new network path), the application still requires valid credentials.
For services behind Cloudflare Access, I validate the Cf-Access-Jwt-Assertion header to ensure the request actually came through Access and wasn’t somehow injected:
import jwt
import requests

# Cloudflare's public keys for JWT verification
CERTS_URL = "https://example.cloudflareaccess.com/cdn-cgi/access/certs"

def verify_cf_access_token(request):
    token = request.headers.get("Cf-Access-Jwt-Assertion")
    if not token:
        return False
    # The certs endpoint publishes a JWK set under "keys"
    keys = requests.get(CERTS_URL).json()["keys"]
    for key in keys:
        try:
            jwt.decode(
                token,
                key=jwt.algorithms.RSAAlgorithm.from_jwk(key),
                algorithms=["RS256"],
                audience="<your-audience-tag>",
            )
            return True
        except jwt.InvalidTokenError:
            continue
    return False

Three layers, three independent auth mechanisms. An attacker would need to bypass Cloudflare Access, be on the Tailscale network, and have valid application credentials. Each layer reduces the blast radius if another layer fails.
WAF & Security Rules #
Cloudflare’s Web Application Firewall runs on every proxied request. The managed rulesets catch common attacks — SQL injection, XSS, path traversal — without any configuration. But the real power is in custom rules.
I use custom WAF rules for patterns specific to my services:
# Block requests with suspicious user agents
(http.user_agent contains "sqlmap") or
(http.user_agent contains "nikto") or
(http.user_agent contains "nmap") → Block
# Rate limit API endpoints
(http.request.uri.path matches "^/api/") → Rate limit: 100 req/min per IP
# Block access to sensitive paths
(http.request.uri.path contains "/.env") or
(http.request.uri.path contains "/wp-admin") or
(http.request.uri.path contains "/.git") → Block
# Country-based restrictions for admin paths
(http.request.uri.path contains "/admin") and
(not ip.geoip.country in {"ID"}) → Block

The country-based rule is practical, not security theater. I know that all legitimate admin access comes from Indonesia. Blocking other countries for admin paths eliminates a massive amount of automated scanning noise. It’s not a security boundary — it’s a noise reduction filter.
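The expressions above are written in Cloudflare's rule language; the logic itself is simple enough to sketch in Python (illustrative only, with the same patterns as the rules above):

```python
BAD_AGENTS = ("sqlmap", "nikto", "nmap")
BLOCKED_PATHS = ("/.env", "/wp-admin", "/.git")

def should_block(user_agent: str, path: str, country: str) -> bool:
    """Toy model of the custom WAF rules: True means block the request."""
    ua = user_agent.lower()
    if any(agent in ua for agent in BAD_AGENTS):
        return True                      # known scanner user agent
    if any(p in path for p in BLOCKED_PATHS):
        return True                      # probing for sensitive paths
    if "/admin" in path and country != "ID":
        return True                      # admin paths: noise filter, not a boundary
    return False
```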
Caching Strategy #
Cloudflare’s CDN caches static assets at edge locations worldwide. For a self-hosted setup, this means your server handles less traffic and users get faster responses regardless of their location.
My caching strategy:
# Page Rules (or Cache Rules in the new UI)
# Static assets: cache aggressively
*.example.com/assets/* → Cache Everything, Edge TTL: 1 month
*.example.com/images/* → Cache Everything, Edge TTL: 1 month
*.example.com/fonts/* → Cache Everything, Edge TTL: 1 year
# API responses: never cache
api.example.com/* → Bypass Cache
# HTML pages: short cache with revalidation
app.example.com/* → Edge TTL: 1 hour, Browser TTL: 5 min

For my static sites (like this blog), I use the Cache Everything page rule with long edge TTLs. The site is rebuilt and deployed on each push, and I purge the Cloudflare cache as part of the deployment pipeline:
# Purge entire cache after deployment
curl -X POST "https://api.cloudflare.com/client/v4/zones/<zone-id>/purge_cache" \
-H "Authorization: Bearer <api-token>" \
-H "Content-Type: application/json" \
--data '{"purge_everything": true}'

Putting It All Together #
Here’s how traffic flows through the stack for a public service:
User → Cloudflare DNS (Anycast)
→ Cloudflare Edge (WAF, DDoS, Cache)
→ Cloudflare Tunnel (encrypted, outbound-only)
→ Origin Server
→ Traefik (internal routing)
→ Docker Container

For an internal service accessed by an admin:
Admin → Cloudflare DNS
→ Cloudflare Access (authentication)
→ Cloudflare Tunnel
→ Origin Server
→ Service (validates CF Access JWT)

For SSH and machine-to-machine:
Admin Device → Tailscale (WireGuard mesh)
→ ACL Check
→ Target Server

No single layer is responsible for security. Each layer adds defense, and the failure of any one layer doesn’t compromise the whole system. That’s the core principle of zero trust — assume every layer will eventually fail, and design accordingly.
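The layering amounts to a conjunction: a request succeeds only if every independent check passes, so compromising one layer alone gains nothing. As a sketch (the layer checks here are hypothetical placeholders):

```python
from typing import Callable, Iterable

def request_allowed(checks: Iterable[Callable[[], bool]]) -> bool:
    """Defense in depth: every layer must independently approve."""
    return all(check() for check in checks)

# Hypothetical layer checks for an internal service:
layers = [
    lambda: True,   # Cloudflare Access authenticated the browser session
    lambda: True,   # Tailscale ACL permits this src/dst/port
    lambda: False,  # application credentials invalid -> denied overall
]
```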
What I’d Do Differently #
If I were starting over:
- Cloudflare Tunnel from day one. I spent months with open ports and Cloudflare proxy before discovering Tunnel. The reduction in attack surface is dramatic and the setup is simpler than managing firewall rules.
- Terraform from the start. Manual DNS management through dashboards doesn’t scale. Once you have more than one domain, it’s worth the initial investment to codify everything.
- Tighter Tailscale ACLs. I started with a permissive "*" rule and tightened over time. Starting tight and loosening is safer than starting loose and tightening.
Wrap Up #
The combination of Cloudflare (DNS, Tunnel, Access, WAF) and Tailscale (mesh VPN, ACLs) gives you an enterprise-grade zero trust architecture for the cost of a few hours of setup. Most of Cloudflare’s features are free, and Tailscale’s free tier covers up to 100 devices.
The old model of "everything behind the firewall is safe" doesn’t work anymore — it probably never did. Zero trust isn’t just a buzzword. It’s a fundamentally better way to think about network security: verify everything, trust nothing, and design for the assumption that every layer will eventually be compromised.
If you’re running self-hosted services with open ports and no authentication layer beyond the application itself, start with Cloudflare Tunnel. It’s the single highest-impact change you can make — zero open ports, automatic TLS, and DDoS protection in about 15 minutes of setup.
Thanks for reading!