While we harp on data and network security in today’s interconnected world, with an entire armory of network security and vulnerability assessment tools now at our disposal, here is a peek into the history of network security and how an accidental piece of code woke us up.
Dubbed the “Morris worm”, it was written by Robert Morris, a graduate student at Cornell University, and launched on Nov 2, 1988 from the computer systems of the Massachusetts Institute of Technology. Within hours of its launch, thousands of computer systems were down, and the Internet was clogged and rendered dysfunctional. In the words of Clifford Stoll, a systems administrator: "I surveyed the network, and found that two thousand computers were infected within fifteen hours. These machines were dead in the water—useless until disinfected. And removing the virus often took two days." Stoll commented that the worm showed the danger of monoculture: "If all the systems on the Arpanet ran Berkeley Unix, the virus would have disabled all fifty thousand of them."
And what Morris was trying to do was remarkably simple – he just wanted to know how many computers were on the Internet. There were no statistics or information available, so he went about it in the most ingenious way possible: write a piece of code that would attempt remote access to all the machines a particular user had access to, replicate itself on those remote machines, and report back to the mother node. He got his answer – about 6,000 computers – at the cost of bringing down the Internet and giving the world a clarion call on what it means to be “hacked”.
The worm exploited some of the known vulnerabilities and remote access options that computer systems provided:
From the start, it would check which systems a user had remote access to, do an rsh to that machine, replicate itself, and try to guess other users’ passwords using a standard dictionary attack. The results were spectacularly successful.
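The dictionary-attack step works roughly like the sketch below. This is Python for brevity – the real worm was C code attacking Unix crypt() hashes from /etc/passwd – and the accounts, hashes, and word list are invented for illustration; SHA-256 is used only to keep the sketch self-contained.

```python
import hashlib

# Hypothetical stored credentials: username -> hash of the password.
stored = {
    "alice": hashlib.sha256(b"wizard").hexdigest(),
    "bob": hashlib.sha256(b"s3cretXYZ").hexdigest(),
}

# A tiny stand-in for the worm's built-in list of common words.
dictionary = ["password", "wizard", "computer", "aaa"]

def dictionary_attack(stored, dictionary):
    """Hash every dictionary word and compare it against every account's
    stored hash; any match reveals that account's password."""
    cracked = {}
    for user, pw_hash in stored.items():
        for word in dictionary:
            if hashlib.sha256(word.encode()).hexdigest() == pw_hash:
                cracked[user] = word
                break
    return cracked

print(dictionary_attack(stored, dictionary))  # alice's dictionary word falls; bob's password survives
```

No cleverness is required: the attack only ever wins against passwords that appear in the word list, which is exactly why it was – and remains – so effective.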
It worked around many of the safeguards an operating system uses to keep a user or program from hogging resources – Unix would automatically “nice” a program that had executed for several minutes, lowering its priority. The worm responded by killing itself and respawning, effectively resetting its priority. Most network data transfers supported only ASCII, so the worm copied itself over as source code and compiled it on the remote machine to start execution.
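The kill-and-respawn trick boils down to a simple check, sketched below. The function name and the 180-second threshold are assumptions for illustration, not the worm’s actual code; the real worm replaced itself with a fresh process rather than printing a message.

```python
# Assumed threshold: respawn before the scheduler's priority decay
# (the automatic "nice") has penalized us for very long.
CPU_LIMIT_SECONDS = 180.0

def should_respawn(cpu_seconds_used, limit=CPU_LIMIT_SECONDS):
    """Has this process accumulated enough CPU time that its priority
    is being lowered by the scheduler?"""
    return cpu_seconds_used > limit

if should_respawn(240.0):
    # The worm would replace itself with a fresh copy at this point,
    # e.g. via os.execv(sys.executable, [sys.executable] + sys.argv).
    # A new process starts with fresh CPU accounting and full priority.
    print("time to respawn")
```

The design insight is that the scheduler penalizes a *process*, not a *program*: a brand-new process running the same code starts with a clean slate.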
It had safeguards to limit itself: it would check whether a computer was already running a copy, and was programmed to replicate onto such a machine only one out of seven times. An unfortunate bug in this logic defeated the check, resulting in the same computer getting infected again and again and virtually bringing it to a standstill.
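That flawed population control might have looked something like the sketch below; the names and structure are assumptions for illustration, not the worm’s actual source.

```python
import random

REINFECT_PROBABILITY = 1.0 / 7.0  # the "one out of seven times" rule

def should_infect(copy_already_running, rng=random.random):
    """Decide whether to install a new copy on a target machine.

    The check was meant to stop runaway growth, but the one-in-seven
    override (intended to defeat fake "already infected" replies) meant
    an infected machine kept accumulating copies anyway.
    """
    if not copy_already_running:
        return True
    return rng() < REINFECT_PROBABILITY

# Even a machine that truthfully reports "already infected" gets hit
# again on roughly 1 of every 7 contacts, so copies pile up over time.
```

Because every running copy consumed CPU and spawned further infection attempts, that modest reinfection rate compounded quickly on a busy network.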
While Morris’s intentions were non-malicious and even arguably harmless, the episode resulted in him being convicted under the Computer Fraud and Abuse Act of 1986, fined $10,000, and sentenced to three years of probation.
Which brings us to today’s world of interconnected systems – we are over-reliant on our networks and systems working perfectly, on our data being safe and hacker-resistant. While tools and systems exist to check networks and systems for vulnerabilities, exploits are tracked, advisories published, and warnings issued, we remain vulnerable in one respect – weak passwords. Change yours now!