
How DNS Actually Works

Every step of a DNS query visualized — from your browser's cache to root servers and back, in under 7 minutes.


Neural Download

Installing mental model for DNS.

You type google.com and hit Enter. In the next hundred milliseconds, your computer starts a chain of conversations with servers it has never met. And any one of them could lie to you.

But first, here's the thing. Your browser doesn't know where google.com lives. Not on its own. So it asks your operating system: do you know?

Your OS checks its own cache — a short-term memory of recent lookups. If you visited google.com thirty seconds ago, the answer's already there. Done. No network traffic.

But if the cache is empty, your OS does something interesting. It sends the question to a server called a recursive resolver. Think of it as a detective. You hand it a name, and it goes out into the world and finds the address. Your ISP runs one. Google runs one at 8.8.8.8. Cloudflare runs one at 1.1.1.1.

The resolver does the legwork. It goes out into the internet, bouncing between servers until it finds the answer. Your machine just waits.
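That question is a small binary packet on the wire. Here's a sketch in Python of building a minimal DNS query by hand, following the RFC 1035 wire format (the transaction ID here is an arbitrary example):

```python
import struct

def build_query(domain: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet (RFC 1035 wire format)."""
    # Header: transaction ID, flags (0x0100 = recursion desired),
    # 1 question, 0 answer/authority/additional records.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed; a zero byte ends the name.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.split(".")
    ) + b"\x00"
    # QTYPE = 1 (A record), QCLASS = 1 (IN, the internet class).
    return header + qname + struct.pack(">HH", 1, 1)

packet = build_query("google.com")
print(len(packet))  # 28 bytes: 12-byte header + 12-byte name + 4-byte type/class
```

Sending this over UDP to port 53 of a resolver would get back a response carrying the same transaction ID, a detail that matters later in this story.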

So where does the detective go first?

The resolver starts at the top of a tree. The absolute top. The root.

There are thirteen root server addresses. Every resolver ships with them built in. When your resolver has no idea where to start, it asks a root server: where is google.com?

The root doesn't know. But it knows who's responsible for dot com. It hands back a referral: go ask the dot com TLD server.

TLD stands for top-level domain. There's one set of servers for dot com, another for dot org, another for dot net. The dot com servers alone handle over a hundred billion queries a day.

The resolver follows the referral and asks the dot com server: where is google.com? The TLD server doesn't know either. But it knows the authoritative name server for Google. Another referral.

Finally, the resolver reaches Google's authoritative server. This is the source of truth. It replies with an IP address — a set of numbers that tells your browser exactly where to connect.

The resolver hands this back to your OS, and your OS hands it to your browser. Total time: maybe eighty milliseconds.
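The whole referral chain fits in a few lines of code. This toy resolver uses invented zone data (the server names and the IP address are illustrative placeholders, not real records) but follows referrals the same way a real one does:

```python
# Toy model of iterative resolution. Each "server" either refers the
# resolver to the next server down the tree, or returns a final answer.
# All names and the IP address below are illustrative placeholders.
ZONES = {
    "root":        {"com.": "tld-com"},               # root: who handles .com
    "tld-com":     {"google.com.": "auth-google"},    # .com: Google's name server
    "auth-google": {"google.com.": "142.250.64.100"}, # authoritative: the answer
}

def resolve(name: str) -> str:
    server = "root"
    while True:
        # (A real resolver handles missing names; this sketch assumes a match.)
        referral_or_answer = next(
            target for suffix, target in ZONES[server].items()
            if name.endswith(suffix)
        )
        if referral_or_answer in ZONES:   # a referral: keep descending
            server = referral_or_answer
        else:                             # an answer: done
            return referral_or_answer

print(resolve("google.com."))
```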

But here's the myth. People say there are only thirteen root servers. That's not quite true. There are thirteen addresses. But behind those addresses are over nineteen hundred physical machines, spread across every continent. They use a trick called anycast — the same IP address routes to whichever physical server the network considers closest. Your query usually hits a root server nearby, not one on the other side of the world.

Why only thirteen addresses? Because in the early days of DNS, the list of root servers had to fit inside a single UDP response. With the overhead of headers and record data, thirteen names was about as many as you could squeeze in. That practical constraint from the nineteen eighties still shapes the internet today.

This system handles trillions of queries a day. So it better be fast.

And it is fast — because almost nothing actually goes through those four hops. The secret is caching. Aggressive, layered caching at every level.

When the resolver gets Google's IP address, it doesn't throw it away. It stores it. For how long? The answer itself says. Every DNS record carries a number called the TTL — time to live. It tells the resolver: you can keep this answer for this many seconds. A typical TTL might be three hundred seconds — five minutes.

For those five minutes, anyone using that resolver who asks for google.com gets the cached answer. Instantly. No root server. No TLD. No authoritative server. Just memory.
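At heart, a resolver cache is a dictionary with expiry timestamps. A minimal sketch (the class name and the injectable clock are my own, chosen so the expiry behavior is easy to demonstrate without waiting real minutes):

```python
import time

class TTLCache:
    """Store DNS answers until their TTL runs out."""
    def __init__(self, clock=time.monotonic):
        self._store = {}
        self._clock = clock  # injectable so tests can fake the passage of time

    def put(self, name, answer, ttl_seconds):
        # The record itself dictates how long we may keep it.
        self._store[name] = (answer, self._clock() + ttl_seconds)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None
        answer, expires_at = entry
        if self._clock() >= expires_at:
            del self._store[name]  # expired: pretend we never knew
            return None
        return answer

now = [0.0]
cache = TTLCache(clock=lambda: now[0])
cache.put("google.com", "142.250.64.100", ttl_seconds=300)
print(cache.get("google.com"))  # the cached answer, instantly
now[0] = 301.0                  # five minutes pass
print(cache.get("google.com"))  # None: the TTL ran out
```

Negative answers can be stored the same way: cache the fact that a name does not exist, with its own TTL.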

Your operating system caches too. Your browser caches too. By the time a query actually reaches a root server, it's passed through three layers of cache that all said: I don't know.

This is why DNS feels instant. The system is designed so that the expensive four-hop lookup almost never happens.

But caching has a consequence. When someone changes a DNS record — moves a website to a new server — the old answer is still cached everywhere. You've heard people say DNS changes take twenty-four to forty-eight hours to propagate. That's not propagation. It's expiration. Caches around the world are holding onto the old answer until their TTL runs out. There's no broadcast. No notification. Just patience.

And DNS doesn't just cache answers. It caches failures too. If a domain doesn't exist, the resolver remembers that. It's called negative caching. Even the absence of an answer gets stored so the system doesn't waste time asking again.

But there's a problem. Speed requires trust. And DNS was designed to trust everyone.

In two thousand eight, a security researcher named Dan Kaminsky found a flaw that let an attacker redirect any website on the internet. Banks. Email. Government sites. Everything.

Here's how it worked.

When a resolver sends a query, it includes a transaction ID — a number that matches the response to the request. The problem? That ID is only sixteen bits. Sixty-five thousand five hundred thirty-six possible values.

Kaminsky realized you could race the real server. Send your resolver a query for a random subdomain — something like 777.example.com. The authoritative server starts preparing the real answer. But the attacker floods the resolver with thousands of forged responses, each guessing a different transaction ID.

If one of those guesses is right — and with sixty-five thousand options, it doesn't take long — the resolver accepts the fake answer. But here's the devastating part. The forged response doesn't just answer the subdomain. It includes a poisoned delegation record that says: by the way, the name server for ALL of example.com is now the attacker's server.

One successful guess, and the attacker controls every name under that domain.
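The arithmetic shows why the race was winnable. The chance that at least one forged response in a flood matches the right sixteen-bit ID:

```python
def spoof_success_probability(forged_responses: int, id_space: int = 2 ** 16) -> float:
    """P(at least one forged response matches the transaction ID)."""
    return 1.0 - (1.0 - 1.0 / id_space) ** forged_responses

# A single race with a hundred forgeries rarely works (about 0.15%)...
print(round(spoof_success_probability(100), 4))
# ...but each random subdomain triggers a fresh race, so across
# thousands of races the attacker's overall odds approach certainty.
```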

Kaminsky kept this secret. He quietly contacted every major DNS vendor — Microsoft, Cisco, the BIND maintainers, all of them — and coordinated a massive simultaneous patch. On July eighth, two thousand eight, they all released fixes at once. The solution: randomize the source port on every query. Instead of guessing from sixty-five thousand options, an attacker now faces billions of combinations.
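Back-of-the-envelope, the patch multiplies the attacker's search space. The exact ephemeral port range varies by operating system; roughly sixty-four thousand usable ports is an assumption here:

```python
transaction_ids = 2 ** 16  # 65,536 possible IDs: the old, lone defense
source_ports = 64_000      # approximate randomized ephemeral port pool

print(transaction_ids)                 # 65536
print(transaction_ids * source_ports)  # 4194304000: roughly 4.2 billion
```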

The patch was deployed. The internet survived. And that fix is still running today — right now — every time your browser loads a page.

But Kaminsky's bug revealed something deeper. DNS was never designed for a hostile internet. It was built in nineteen eighty three, when the entire network was a few hundred machines that all trusted each other. The patches helped. Later, DNSSEC added cryptographic signatures so resolvers can verify that responses haven't been tampered with. The system keeps getting stronger.

Now you know how it actually works.

Cognitive architecture... updated.