The Internet is a dangerous place for applications. In fact, it is reasonable to say that you
couldn’t create a less secure system if you tried. It is anonymous, uncontrolled, always on,
and instantly accessible from anywhere. This is a world where every bad actor, cracker, script
kiddie, and scam artist is your neighbor, and it is stupendously difficult to deny them access to
your front door.
And those are just the human threats. Any one person can control hundreds or thousands
of distributed systems using scripts and techniques designed for clustered computing.
Automated systems that make network requests, sometimes called robots, can be operated
legitimately, as in the case of Google indexers or Akamai media proxies, but they can also be put
to nefarious ends. Distributed Denial of Service attacks are a crude form of this; more sophisticated
robots post advertisements on message boards, index prices across a wide range of e-commerce
sites, or hijack processing cycles and bandwidth from other systems.
Despite the protection we apply in terms of firewalls and spam filters, the Internet remains
a hostile environment. TCP/IP is inherently insecure, and intentionally so. Any system between
you and a network server can read and modify the packets you send. In some cases, as with
Network Address Translation, they’re supposed to. In other cases—firewall content filtering
comes to mind—the ability to change the payload of packets lies outside of any specification or
guideline. And the problem isn’t limited to modification by intermediaries. Packets can be
arbitrarily generated to look as though they come from somewhere else.
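That last point is easy to see at the byte level: in an IPv4 header, the source address is just four bytes that the sender fills in, protected by nothing more than a checksum computed over values the sender also controls. The following sketch builds such a header in Python to illustrate the idea (the addresses come from reserved documentation ranges; this only constructs the bytes and sends nothing):

```python
import socket
import struct

def ipv4_checksum(data: bytes) -> int:
    """Standard ones'-complement sum over 16-bit words (RFC 791)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(src: str, dst: str, payload_len: int = 0) -> bytes:
    """Build a minimal 20-byte IPv4 header with an arbitrary source address."""
    ver_ihl = (4 << 4) | 5                  # version 4, IHL 5 (no options)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        ver_ihl, 0, 20 + payload_len,       # version/IHL, TOS, total length
        0x1234, 0,                          # identification, flags/fragment offset
        64, socket.IPPROTO_TCP, 0,          # TTL, protocol, checksum placeholder
        socket.inet_aton(src),              # source address: any value we choose
        socket.inet_aton(dst),
    )
    checksum = ipv4_checksum(header)
    return header[:10] + struct.pack("!H", checksum) + header[12:]

forged = build_ipv4_header("198.51.100.7", "203.0.113.9")
print(socket.inet_ntoa(forged[12:16]))     # the "source" is whatever we wrote
```

Nothing in the protocol verifies that the sender actually owns 198.51.100.7; an attacker with raw-socket access can hand such a header to the network as-is, which is why a source address alone can never serve as authentication.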