Maximum Security:
A Hacker's Guide to Protecting Your Internet Site and Network
4
Just Who Can Be Hacked, Anyway?
The Internet was born in 1969. Almost immediately after the network was established,
researchers were confronted with a disturbing fact: The Internet was not secure and
could easily be cracked. Today, writers try to minimize this fact, reminding you
that the security technologies of the time were primitive. This has little bearing.
Today, security technology is quite complex and the Internet is still easily cracked.
I would like to return to those early days of the Internet. Not only will this
give you a flavor of the time, it will demonstrate an important point: The Internet
is no more secure today than it was twenty years ago.
My evidence begins with a document: a Request for Comments, or RFC.
Before you review the document, let me explain what the RFC system is about. This
is important because I refer to many RFC documents throughout this book.
The Request For Comments (RFC) System
Requests for Comments (RFC) documents are special. They are written (and posted
to the Net) by individuals engaged in the development or maintenance of the Internet.
RFC documents serve the important purpose of requesting Internet-wide comments on
new or developing technology. Most often, RFC documents contain proposed standards.
The RFC system is one of evolution. The author of an RFC posts the document to
the Internet, proposing a standard that he or she would like to see adopted network-wide.
The author then waits for feedback from other sources. The document (after more comments/changes
have been made) goes to draft or directly to Internet standard status. Comments and
changes are made by working groups of the Internet Engineering Task Force (IETF).
Cross Reference: The Internet Engineering
Task Force (IETF) is "... a large, open, international community of network
designers, operators, vendors, and researchers concerned with the evolution of the
Internet architecture and the smooth operation of the Internet." To learn more
about the IETF, go to its home page at http://www.ietf.cnri.reston.va.us/.
RFC documents are numbered sequentially (the higher the number, the more recent
the document) and are distributed at various servers on the Internet.
Cross Reference: One central server from
which to retrieve RFC documents is at http://ds0.internic.net/ds/dspg0intdoc.html.
This address (URL) points to InterNIC, the Internet Network Information Center.
InterNIC
InterNIC provides comprehensive databases on networking information. These databases
contain the larger portion of collected knowledge on the design and scope of the
Internet. Some of those databases include
- The WHOIS Database--This database contains all the names and network numbers
of hosts (or machines) permanently connected to the Internet in the United States
(except *.mil addresses, which must be obtained at nic.ddn.mil).
- The Directory of Directories--This is a massive listing of nearly all resources
on the Internet, broken into categories.
- The RFC Index--This is a collection of all RFC documents.
Cross Reference: All these documents are
centrally available at http://rs.internic.net.
A Holiday Message
As I mentioned earlier, I refer here to an early RFC. The document in question
is RFC 602: The Stockings Were Hung by the Chimney with Care. RFC 602 was
posted by Bob Metcalfe in December, 1973. The subject matter concerned weak passwords.
In it, Metcalfe writes: The ARPA Computer Network is susceptible to security violations
for at least the three following reasons:
- 1. Individual sites, used to physical limitations on machine access, have
not yet taken sufficient precautions toward securing their systems against unauthorized
remote use. For example, many people still use passwords which are easy to guess:
their fist [sic] names, their initials, their host name spelled backwards, a string
of characters which are easy to type in sequence (such as ZXCVBNM).
2. The TIP allows access to the ARPANET to a much wider audience than is thought
or intended. TIP phone numbers are posted, like those scribbled hastily on the walls
of phone booths and men's rooms. The TIP required no user identification before giving
service. Thus, many people, including those who used to spend their time ripping
off Ma Bell, get access to our stockings in a most anonymous way.
3. There is lingering affection for the challenge of breaking someone's system.
This affection lingers despite the fact that everyone knows that it's easy to break
systems, even easier to crash them.
All of this would be quite humorous and cause for raucous eye winking and elbow
nudging, if it weren't for the fact that in recent weeks at least two major serving
hosts were crashed under suspicious circumstances by people who knew what they were
risking; on yet a third system, the system wheel password was compromised--by two
high school students in Los Angeles no less. We suspect that the number of dangerous
security violations is larger than any of us know and is growing. You are advised not
to sit "in hope that Saint Nicholas would soon be there."
That document was posted well over 20 years ago. Naturally, this password problem is no longer
an issue. Or is it? Examine this excerpt from a Defense Data Network Security Bulletin,
written in 1993:
- Host Administrators must assure that passwords are kept secret by their users.
Host Administrators must also assure that passwords are robust enough to thwart exhaustive
attack by password cracking mechanisms, changed periodically and that password files
are adequately protected. Passwords should be changed at least annually.
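The weak choices Metcalfe cataloged--first names, initials, the host name spelled backwards, easy keyboard runs such as ZXCVBNM--are trivial to screen for mechanically. What follows is a minimal sketch in C of such a screen; the word list, user name, and host name are hypothetical, and a real checker would also compare candidate passwords against large dictionaries:

/* weakpass.c -- a minimal, hypothetical sketch of screening for the
 * password choices RFC 602 complains about: the user's own name, the
 * host name spelled backwards, and easy keyboard runs.
 * Compile with: cc weakpass.c -o weakpass                            */
#include <stdio.h>
#include <string.h>
#include <strings.h>
#include <ctype.h>

/* Copy a lowercased, reversed version of s into buf. */
static void reverse_lower(const char *s, char *buf, size_t n)
{
    size_t len = strlen(s), i;

    for (i = 0; i < len && i < n - 1; i++)
        buf[i] = (char)tolower((unsigned char)s[len - 1 - i]);
    buf[i] = '\0';
}

/* Return 1 if the password matches one of the weak patterns. */
static int is_weak(const char *pw, const char *user, const char *host)
{
    const char *runs[] = { "zxcvbnm", "qwerty", "asdfgh", "123456" };
    char rev[256];
    size_t i;

    if (strlen(pw) < 6)
        return 1;                         /* too short to resist guessing   */
    if (strcasecmp(pw, user) == 0)
        return 1;                         /* password is the account name   */
    reverse_lower(host, rev, sizeof rev);
    if (strcasecmp(pw, rev) == 0)
        return 1;                         /* host name spelled backwards    */
    for (i = 0; i < sizeof runs / sizeof runs[0]; i++)
        if (strcasecmp(pw, runs[i]) == 0)
            return 1;                     /* easy-to-type keyboard sequence */
    return 0;
}

int main(void)
{
    /* A hypothetical account "bob" on a hypothetical host "arpa". */
    printf("zxcvbnm: %s\n", is_weak("zxcvbnm", "bob", "arpa") ? "weak" : "ok");
    printf("apra:    %s\n", is_weak("apra", "bob", "arpa") ? "weak" : "ok");
    printf("7gR!mq2: %s\n", is_weak("7gR!mq2", "bob", "arpa") ? "weak" : "ok");
    return 0;
}

Even a screen this crude catches the keyboard runs and reversed host names that Metcalfe complained about.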
Take notice. In the more than 25 years of the Internet's existence, it has never
been secure. That's a fact. Later in this book, I will try to explain why. For now,
however, I confine our inquiry to a narrow question: Just who can be cracked?
The short answer is this: As long as a person maintains a connection to the Internet
(permanent or otherwise), he or she can be cracked. Before treating this subject
in depth, however, I want to define cracked.
What Is Meant by the Term Cracked?
For our purposes, cracked refers to that condition in which the victim
network has suffered an unauthorized intrusion. There are various degrees of this
condition, each of which is discussed at length within this book. Here, I offer a
few examples of this cracked condition:
- The intruder gains access and nothing more (access being defined as simple
entry; entry that is unauthorized on a network that requires--at a minimum--a login
and password).
- The intruder gains access and destroys, corrupts, or otherwise alters data.
- The intruder gains access and seizes control of a compartmentalized portion of
the system or the whole system, perhaps denying access even to privileged users.
- The intruder does NOT gain access, but instead implements malicious procedures
that cause that network to fail, reboot, hang, or otherwise manifest an inoperable
condition, either permanently or temporarily.
To be fair, modern security techniques have made cracking more difficult. However,
the gorge between the word difficult and the word impossible is wide
indeed. Today, crackers have access to (and often study religiously) a wealth of
security information, much of which is freely available on the Internet. The balance
of knowledge between these individuals and bona-fide security specialists is not
greatly disproportionate. In fact, that gap is closing each day.
The purpose of this chapter is to show you that cracking is a common activity:
so common that assurances from anyone that the Internet is secure should be
viewed with extreme suspicion. To drive that point home, I will begin with governmental
entities. After all, defense and intelligence agencies form the basis of our national
security infrastructure. They, more than any other group, must be secure.
Government
Throughout the Internet's history, government sites have been popular targets
among crackers. This is due primarily to press coverage that follows such an event.
Crackers enjoy any media attention they can get. Hence, their philosophy is generally
this: If you're going to crack a site, crack one that matters.
Are crackers making headway in compromising our nation's most secure networks?
Absolutely. To find evidence that government systems are susceptible to attack, one
needn't look far. A recent report filed by the General Accounting Office (GAO)
concerning the security of the nation's defense networks concluded that:
- Defense may have been attacked as many as 250,000 times last year...In addition,
in testing its systems, DISA attacks and successfully penetrates Defense systems
65 percent of the time. According to Defense officials, attackers have obtained and
corrupted sensitive information--they have stolen, modified, and destroyed both data
and software. They have installed unwanted files and "back doors" which
circumvent normal system protection and allow attackers unauthorized access in the
future. They have shut down and crashed entire systems and networks, denying service
to users who depend on automated systems to help meet critical missions. Numerous
Defense functions have been adversely affected, including weapons and supercomputer
research, logistics, finance, procurement, personnel management, military health,
and payroll.1
1Information Security: Computer Attacks at Department
of Defense Pose Increasing Risks (Chapter Report, 05/22/96, GAO/AIMD-96-84);
Chapter 0:3.2, Paragraph 1.
Cross Reference: Information Security:
Computer Attacks at Department of Defense Pose Increasing Risks is available
online at http://www.securitymanagement.com/library/000215.html.
That same report revealed that although more than one quarter of a million attacks
occur annually, only 1 in 500 attacks is actually detected and reported. (Note that
these sites are defense oriented and therefore implement more stringent security
policies than many commercial sites. Many government sites employ secure operating
systems that also feature advanced, proprietary security utilities.)
Government agencies, mindful of the public confidence, understandably try to minimize
these issues. But some of the incidents are difficult to obscure. For example, in
1994, crackers gained carte-blanche access to a weapons-research laboratory in Rome,
New York. Over a two-day period, the crackers downloaded vital national security
information, including wartime communication protocols.
Such information is extremely sensitive and, if used improperly, could jeopardize
the lives of American service personnel. If crackers with relatively modest equipment
can access such information, hostile foreign governments (with ample computing power)
could access even more.
SATAN and Other Tools
Today, government sites are cracked with increasing frequency. The authors of
the GAO report attribute this largely to the rise of user-friendly security programs
(such as SATAN). SATAN is a powerful scanner program that automatically detects
security weaknesses in remote hosts. It was released freely on the Net in April,
1995. Its authors, Dan Farmer and Wietse Venema, are legends in Internet security.
(You will learn more about these two gentlemen in Chapter 9, "Scanners.")
Because SATAN is conveniently operated through an HTML browser (such as Netscape
Navigator or NCSA Mosaic), a cracker requires less practical knowledge of systems.
Instead, he or she simply points, clicks, and waits for an alert that SATAN has found
a vulnerable system (at least this is what the GAO report suggests). Is it true?
No. Rather, the government is making excuses for its own shoddy security. Here
is why: First, SATAN runs only on UNIX platforms. Traditionally, such platforms required
expensive workstation hardware. Workstation hardware of this class is extremely specialized
and isn't sold at the neighborhood Circuit City store. However, those quick to defend
the government make the point that free versions of UNIX now exist for the IBM-compatible
platform. One such distribution is a popular operating system named Linux.
Linux is a true 32-bit, multi-user, multi-tasking, UNIX-like operating system.
It is a powerful computing environment and, when installed on the average PC, grants
the user an enormous amount of authority, particularly in the context of the Internet.
For example, Linux distributions now come stocked with every manner of server ever
created for TCP/IP transport over the Net.
Cross Reference: Linux runs on a wide
range of platforms, not just IBM compatibles. Some of those platforms include the
Motorola 68k, the Digital Alpha, the Motorola PowerPC, and even the Sun Microsystems
SPARC architecture. If you want to learn more about Linux, go to the ultimate Linux
page at http://www.linux.org/.
Distributions of Linux are freely available for download from the Net, or can
be obtained at any local bookstore. CD-ROM distributions are usually bundled with
books that instruct users on using Linux. In this way, vendors can make money on
an ostensibly free operating system. The average Linux book containing
a Linux installation CD-ROM sells for forty dollars.
Furthermore, most Linux distributions come with extensive development tools. These
include a multitude of language compilers and interpreters:
- A C language compiler
- A C++ language compiler
- A Smalltalk interpreter
- A BASIC interpreter
- A Perl interpreter
- Tools for FORTRAN
- Tools for Pascal
- A Common LISP interpreter
Yet, even given these facts, the average kid with little knowledge of UNIX cannot
implement a tool such as SATAN on a Linux platform. Such tools rarely come prebuilt
in binary form. The majority are distributed as source code, which may then be compiled
with options specific to the current platform. Thus, if you are working in AIX (IBM's
proprietary version of UNIX), the program must be compiled for AIX. If working in
Ultrix (DEC), it must be compiled for Ultrix, and so on.
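To make that concrete, here is a hypothetical fragment of the sort of platform-conditional source such tools carry. The preprocessor macros shown are illustrative; the point is simply that the person building the program must know which platform he or she is compiling for:

/* platform.c -- an illustrative (hypothetical) fragment showing why a
 * source distribution must be compiled per platform: each vendor's
 * UNIX identifies itself differently and differs in small details.
 * Compile with the native compiler, for example: cc platform.c       */
#include <stdio.h>

int main(void)
{
#if defined(_AIX)
    puts("Compiled for AIX");            /* IBM's proprietary UNIX */
#elif defined(ultrix) || defined(__ultrix__)
    puts("Compiled for Ultrix");         /* DEC's UNIX             */
#elif defined(__linux__)
    puts("Compiled for Linux");
#else
    puts("Compiled for some other UNIX");
#endif
    return 0;
}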
NOTE: A port was available for Linux not
long after SATAN was released. However, its bugs were never completely eliminated, and installing and running SATAN remained an elusive and frustrating experience for many Linux users. An easily installed port was slow in coming.
Most PC users (without UNIX experience) are hopelessly lost even at the time of
the Linux installation. UNIX conventions are drastically different from those in
DOS. Thus, before a new Linux user becomes even moderately proficient, a year of
use will likely pass. This year will be spent learning how to use MIT's X Window
System, how to configure TCP/IP settings, how to get properly connected to the Internet,
and how to unpack software packages that come in basic source-code form.
Even after the year has passed, the user may still not be able to use SATAN. The
SATAN distribution doesn't compile well on the Linux platform. For it to work, the
user must have installed the very latest version of Perl. Only very recent Linux
distributions (those released within one year of the publishing of this book) are
likely to have such a version installed. Thus, the user must also know how to find,
retrieve, unpack, and properly install Perl.
In short, the distance between a non-UNIX literate PC user and one who effectively
uses SATAN is very long indeed. Furthermore, during that journey from the former
to the latter, the user must have ample time (and a brutal resolve) to learn. This
is not the type of journey made by someone who wants to point and click his or her
way to super-cracker status. It is a journey undertaken by someone deeply fascinated
by operating systems, security, and the Internet in general.
So the government's assertion that SATAN, an excellent tool designed expressly
to improve Internet security, has contributed to point-and-click cracking is unfounded.
True, SATAN will perform automated scans for a user. Nonetheless, that user must
have strong knowledge of Internet security, UNIX, and several programming languages.
There are also collateral issues regarding the machine and connection type. For
example, even if the user is seasoned, he or she must still have adequate hardware
power to use SATAN effectively.
Cross Reference: You will examine SATAN
(and programs like it) in greater detail in Chapter 9. In that chapter, you will
be familiarized with many scanners, how they work, how they are designed, and the
type of information they can provide for users.
SATAN is not the problem with government sites. Indeed, SATAN is not the only
diagnostic tool that can automatically identify security holes in a system. There
are dozens of such tools available:
- Internet Security Scanner (ISS)
- Strobe
- Network Security Scanner (NSS)
- identTCPscan
- Jakal
Chapter 9 examines these automated tools and their methods of operation. For now,
I will simply say this: These tools operate by attacking the available TCP/IP services
and ports open and running on remote systems.
Whether available to a limited class of users or worldwide, these tools share
one common attribute: They check for known holes. That is, they check for security
vulnerabilities that are commonly recognized within the security community. The chief
value of such tools is their capability to automate the process of checking one or
more machines (hundreds of machines, if the user so wishes). These tools accomplish
nothing more than a knowledgeable cracker might by hand. They simply automate the
process.
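The heart of such a tool is surprisingly plain. The following is a hedged, stripped-down sketch in C: it simply attempts a TCP connection to a handful of well-known ports on a single host and reports which ones answer. Real scanners wrap a great deal of hole-specific knowledge around a loop much like this one, and probes of this kind should of course be run only against machines you administer:

/* miniscan.c -- a minimal connect() scan (illustrative only): try a
 * few well-known TCP ports on one host and report which ones accept
 * a connection.
 * Compile: cc miniscan.c -o miniscan    Run: ./miniscan 127.0.0.1    */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(int argc, char **argv)
{
    int ports[] = { 21, 23, 25, 79, 80, 110, 111, 143 };
    size_t i;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <ip-address>\n", argv[0]);
        return 1;
    }
    for (i = 0; i < sizeof ports / sizeof ports[0]; i++) {
        struct sockaddr_in sa;
        int s = socket(AF_INET, SOCK_STREAM, 0);

        if (s < 0) {
            perror("socket");
            return 1;
        }
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port   = htons(ports[i]);
        if (inet_pton(AF_INET, argv[1], &sa.sin_addr) != 1) {
            fprintf(stderr, "bad address: %s\n", argv[1]);
            return 1;
        }
        /* connect() succeeds only if something is listening there. */
        if (connect(s, (struct sockaddr *)&sa, sizeof sa) == 0)
            printf("port %d open\n", ports[i]);
        close(s);
    }
    return 0;
}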
Education and Awareness About Security
The problem is not that such tools exist, but that education about security is
poor. Moreover, the defense information networks are operating with archaic internal
security policies. These policies prevent (rather than promote) security. To demonstrate
why, I want to refer to the GAO report I mentioned previously. In it, the government
concedes:
- ...The military services and Defense agencies have issued a number of information
security policies, but they are dated, inconsistent and incomplete...
The report points to a series of Defense Directives as examples. It cites (as
the most significant DoD policy document) Defense Directive 5200.28. This document,
Security Requirements for Automated Information Systems, is dated March 21,
1988.
In order to demonstrate the real problem here, let's examine a portion of that
Defense Directive. Paragraph 5 of Section D of that document is written as follows:
- Computer security features of commercially produced products and Government-developed
or -derived products shall be evaluated (as requested) for designation as trusted
computer products for inclusion on the Evaluated Products List (EPL). Evaluated products
shall be designated as meeting security criteria maintained by the National Computer
Security Center (NCSC) at NSA defined by the security division, class, and feature
(e.g., B, B1, access control) described in DoD 5200.28-STD (reference (K)).
Cross Reference: Security Requirements
for Automated Information Systems is available on the Internet at http://140.229.1.16:9000/htdocs/teinfo/directives/soft/5200.28.html
It is within the provisions of that paragraph that the government's main problem
lies. The Evaluated Products List (EPL) is a list of products that have been evaluated
for security ratings, based on DoD guidelines. (The National Security Agency actually
oversees the evaluation.) Products on the list can have various levels of security
certification. For example, Windows NT version 3.51 has obtained a certification
of C2. This is a very limited security certification.
Cross Reference: Before you continue,
you should probably briefly view the EPL for yourself. Check it out at http://www.radium.ncsc.mil/tpep/epl/index.html.
The first thing you will notice about this list is that most of the products are
old. For example, examine the EPL listing for Trusted Information Systems' Trusted
XENIX, a UNIX-based operating system.
Cross Reference: The listing for Trusted
XENIX can be found at http://www.radium.ncsc.mil/tpep/epl/entries/CSC-EPL-92-001-A.html
If you examine the listing closely, you will be astonished. TIS Trusted XENIX
is indeed on the EPL. It is therefore endorsed and cleared as a safe system, one
that meets the government's guidelines (as of September 1993). However, examine even
more closely the platforms on which this product has been cleared. Here are a few:
- AST 386/25 and Premium 386/33
- HP Vectra 386
- NCR PC386sx
- Zenith Z-386/33
These architectures are ancient. They are so old that no one would actually
use them, except perhaps as a garage hacking project on a nice Sunday afternoon (or
perhaps if they were legacy systems that housed software or other data that was irreplaceable).
In other words, by the time products reach the EPL, they are often pathetically obsolete.
(The evaluation process is lengthy and expensive not only for the vendor, but for
the American people, who are footing the bill for all this.) Therefore, you can conclude
that much of the DoD's equipment, software, and security procedures are likewise
obsolete.
Now, add the question of internal education. Are Defense personnel trained in
(and implementing) the latest security techniques? No. Again, quoting the GAO report:
- Defense officials generally agreed that user awareness training was needed, but
stated that installation commanders do not always understand computer security risk
and thus, do not always devote sufficient resources to the problem.
High-Profile Cases
Lack of awareness is pervasive, extending far beyond the confines of a few isolated
Defense sites. It is a problem that affects many federal agencies throughout the
country. Evidence of it routinely appears on the front pages of our nation's most
popular newspapers. Indeed, some very high-profile government sites were cracked
in 1996, including the Central Intelligence Agency (CIA) and the Department of Justice
(DoJ).
- In the CIA case, a cracker seized control on September 18, 1996, replacing the
welcome banner with one that read The Central Stupidity Agency. Accompanying
this were links to a hacker group in Scandinavia.
Cross Reference: To see the CIA site in
its hacked state, visit http://www.skeeve.net/cia/.
NOTE: skeeve.net was one of many
sites that preserved the hacked CIA page, primarily for historical purposes. It is
reported that after skeeve.net put the hacked CIA page out for display,
its server received hundreds of hits from government sites, including the CIA. Some
of these hits involved finger queries and other snooping utilities.
- In the DoJ incident (Saturday, August 17, 1996), a photograph of Adolf Hitler
was offered as the Attorney General of the United States.
Cross Reference: The DoJ site, in its hacked
state, can be viewed at http://river-city.clever.net/hacked/doj/.
As of this writing, neither case has been solved; most likely, neither will ever
be. Both are reportedly being investigated by the FBI.
Typically, government officials characterize such incidents as rare. Just how
rare are they? Not very. In the last year, many such incidents have transpired:
- During a period spanning from July 1995 to March 1996, a student in Argentina
compromised key sites in the United States, including those maintained by the Armed
Forces and NASA.
- In August, 1996, a soldier at Fort Bragg reportedly compromised an "impenetrable"
military computer system and widely distributed passwords he obtained.
- In December, 1996, hackers seized control of a United States Air Force site,
replacing the site's defense statistics with pornography. The Pentagon's networked
site, DefenseLINK, was shut down for more than 24 hours as a result.
The phenomenon was not limited to federal agencies. In October, 1996, the home
page of the Florida State Supreme Court was cracked. Prior to its cracking, the page's
intended use was to distribute information about the court, including text reproductions
of recent court decisions. The crackers removed this information and replaced it
with pornography. Ironically, the Court subsequently reported an unusually high rate
of hits.
In 1996 alone, at least six high-profile government sites were cracked. Two of
these (the CIA and FBI) were organizations responsible for maintaining departments
for information warfare or computer crime. Both are charged with one or more facets
of national security. What does all this mean? Is our national security going down
the tubes? It depends on how you look at it.
In the CIA and FBI cases, the cracking activity was insignificant. Neither server
held valuable information, and the only real damage was to the reputation of their
owners. However, the Rome, New York case was far more serious (as was the case at
Fort Bragg). Such cases demonstrate the potential for disaster.
There is a more frightening aspect to this: The sites mentioned previously were
WWW sites, which are highly visible to the public. Therefore, government agencies
cannot hide when their home pages have been cracked. But what about when the crack
involves some other portion of the targeted system (a portion generally unseen by
the public)? It's likely that when such a crack occurs, the press is not involved.
As such, there are probably many more government cracks that you will never hear
about.
To be fair, the U.S. government is trying to keep up with the times. In January
1997, a reporter for Computerworld magazine broke a major story concerning Pentagon
efforts to increase security. Apparently, the Department of Defense is going to establish
its own tiger team (a group of individuals whose sole purpose will be to attack
DoD computers). Such attacks will reveal key flaws in DoD security.
Other stories indicate that defense agencies have undertaken new and improved
technologies to protect computers holding data vital to national security. However,
as reported by Philip Shenon, a prominent technology writer for the New York Times:
- While the Pentagon is developing encryption devices that show promise in defeating
computer hackers, the accounting office, which is the investigative arm of Congress,
warned that none of the proposed technical solutions was foolproof, and that the
military's current security program was "dated, inconsistent and incomplete."
The Pentagon's activity to develop devices that "show promise in defeating
computer hackers" appears reassuring. From this, one could reasonably infer
that something is being done about the problem. However, the reality and seriousness
of the situation is being heavily underplayed.
If Defense and other vital networks cannot defend against domestic attacks from
crackers, there is little likelihood that they can defend from hostile foreign powers.
I made this point earlier in the chapter, but now I want to expand on it.
Can the United States Protect the National Information
Infrastructure?
The United States cannot be matched by any nation for military power. We have
sufficient destructive power at our disposal to eliminate the entire human race.
So from a military standpoint, there is no comparison between the United States and
even a handful of third-world nations. The same is not true, however, in respect
to information warfare.
The introduction of advanced microcomputers has forever changed the balance of
power in information warfare. The average Pentium processor now selling at retail
computer chains throughout the country is more powerful than many mainframes were
five years ago (it is certainly many times faster). Add the porting of high-performance
UNIX-based operating systems to the IBM platform, and you have an entirely new environment.
A third-world nation could pose a significant threat to our national information
infrastructure. Using the tools described previously (and some high-speed connections),
a third-world nation could effectively wage a successful information warfare campaign
against the United States at costs well within their means. In fact, it is likely
that within the next few years, we'll experience incidents of bona-fide cyberterrorism.
To prepare for the future, more must be done than simply allocating funds. The
federal government must work closely with security organizations and corporate entities
to establish new and improved standards. If the new standards do not provide for
quicker and more efficient means of implementing security, we will be faced with
very dire circumstances.
Who Holds the Cards?
This (not legitimate security tools such as SATAN) is the problem: Thirty years
ago, the U.S. government held all the cards with respect to technology. The average
U.S. citizen held next to nothing. Today, the average American has access to very
advanced technology. In some instances, that technology is so advanced that it equals
technology currently possessed by the government. Encryption technology is a good
example.
Many Americans use encryption programs to protect their data from others. Some
of these encryption programs (such as the very famous utility PGP, created by Phil
Zimmermann) produce military-grade encryption. This level of encryption is sufficiently
strong that U.S. intelligence agencies cannot crack it (at least not within a reasonable
amount of time, and often, time is of the essence).
For example, suppose one individual sends a message to another person regarding
the date on which they will jointly blow up the United Nations building. Clearly,
time is of the essence. If U.S. intelligence officials cannot decipher this message
before the date of the event, they might as well have not cracked the message at
all.
This principle applies directly to Internet security. Security technology has
trickled down to the masses at an astonishing rate. Crackers (and other talented
programmers) have taken this technology and rapidly improved it. Meanwhile, the government
moves along more slowly, tied down by restrictive and archaic policies. This has
allowed the private sector to catch up (and even surpass) the government in some
fields of research.
This is a matter of national concern. Many grass-roots radical cracker organizations
are enthralled with these circumstances. They often heckle the government, taking
pleasure in the advanced knowledge that they possess. These are irresponsible forces
in the programming community, forces that carelessly perpetuate the weakening of
the national information infrastructure. Such forces should work to assist and enlighten
government agencies, but they often do not, and their reasons are sometimes understandable.
The government has, for many years, treated crackers and even hackers as criminals
of high order. As such, the government is unwilling to accept whatever valuable information
these folks have to offer. Communication between these opposing forces is almost
always negative. Bitter legal disputes have developed over the years. Indeed, some
very legitimate security specialists have lost time, money, and dignity at the hands
of the U.S. government. On more than one occasion, the government was entirely mistaken
and ruined (or otherwise seriously disrupted) the lives of law-abiding citizens.
In the next chapter, I will discuss a few such cases. Most arise out of the government's
poor understanding of the technology.
New paths of communication should be opened between the government and those in
possession of advanced knowledge. The Internet marginally assists in this process,
usually through devices such as mailing lists and Usenet. However, there is currently
no concerted effort to bring these opposing forces together on an official basis.
This is unfortunate because it fosters a situation where good minds in America remain
pitted against one another. Before we can effectively defend our national information
infrastructure, we must come to terms with this problem. For the moment, we are at
war with ourselves.
The Public Sector
I realize that a category such as the public sector might be easily misunderstood.
To prevent that, I want to identify the range of this category. Here, the public
sector refers to any entity that is not a government, an institution, or an individual.
Thus, I will be examining companies (public and private), Internet service providers,
organizations, or any other entity of commercial or semi-commercial character.
Before forging ahead, one point should be made: Commercial and other public entities
do not share the experience enjoyed by government sites. In other words, they have
not yet been cracked to pieces. Only in the past five years have commercial entities
flocked to the Internet. Therefore, some allowances must be made. It is unreasonable
to expect these folks to make their sites impenetrable. Many are smaller companies
and for a moment, I want to address these folks directly: You, more than any other
group, need to acquire sound security advice.
Small companies operate differently from large ones. For the little guy, cost
is almost always a strong consideration. When such firms establish an Internet presence,
they usually do so either by using in-house technical personnel or by recruiting
an Internet guru. In either case, they are probably buying quality programming talent.
However, what they are buying in terms of security may vary.
Large companies specializing in security charge a lot of money for their services.
Also, most of these specialize in UNIX security. So, small companies seeking to establish
an Internet presence may avoid established security firms. First, the cost is a significant
deterrent. Moreover, many small companies do not use UNIX. Instead, they may use
Novell NetWare, LANtastic, Windows NT, Windows 95, and so forth.
This leaves small businesses in a difficult position. They must either pay high
costs or take their programmers' word that the network will be secure. Because such
small businesses usually do not have personnel who are well educated in security,
they are at the mercy of the individual charged with developing the site. That can
be a very serious matter.
The problem is that many "consultants" spuriously claim to know all about
security. They make these claims when, in fact, they may know little or nothing about
the subject. Typically, they have purchased a Web-development package, they generate
attractive Web pages, and know how to set up a server. Perhaps they have a limited
background in security, having scratched the surface. They take money from their
clients, rationalizing that there is only a very slim chance that their clients'
Web servers will get hacked. For most, this works out well. But even if their clients' servers are never hacked, those servers may remain indefinitely in a state of insecurity.
Commercial sites are also more likely to purchase one or two security products
and call it a day. They may pay several thousand dollars for an ostensibly secure
system and leave it at that, trusting everything to that single product.
For these reasons, commercial sites are routinely cracked, and this trend will
probably continue. Part of the problem is this: There is no real national standard
on security in the private sector. Hence, one most often qualifies as a security
specialist through hard experience and not by virtue of any formal education. It
is true that there are many courses available and even talks given by individuals
such as Farmer and Venema. These resources legitimately qualify an individual to
do security work. However, there is no single piece of paper that a company can demand
that will ensure the quality of the security they are getting.
Because these smaller businesses lack security knowledge, they become victims
of unscrupulous "security specialists." I hope that this trend will change,
but I predict that for now, it will only become more prevalent. I say this for one
reason: Despite the fact that many thousands of American businesses are now online,
this represents a mere fraction of commercial America. There are millions of businesses
that have yet to get connected. These millions are all new fish, and security charlatans
are lined up waiting to catch them.
The Public Sector Getting Cracked
In the last year, a series of commercial sites have come under attack. These attacks
have varied widely in technique. Earlier in this chapter, I defined some of those
techniques and the attending damage or interruption of service they cause. Here,
I want to look at cases that more definitively illustrate these techniques. Let's
start with the recent attack on Panix.com.
Panix.com
Panix.com (Public Access Networks Corporation) is a large Internet service provider
(ISP) that provides Internet access to several hundred thousand New York residents.
On September 6, 1996, Panix came under heavy attack from the void.
The Panix case was very significant because it demonstrates a technique known
as the Denial of Service (DoS) attack. This type of attack does
not involve an intruder gaining access. Instead, the cracker undertakes remote procedures
that render a portion (or sometimes all) of a target inoperable.
The techniques employed in such an attack are simple. As you will learn in Chapter
6, "A Brief Primer on TCP/IP," connections over the Internet are initiated
via a procedure called the three-way handshake. In this process, the requesting
machine sends a packet requesting connection. The target machine responds with an
acknowledgment. The requesting machine then returns its own acknowledgment and a
connection is established.
In a syn_flooder attack, the requesting (cracker's) machine sends a series of
connection requests but fails to acknowledge the target's response. Because the target
never receives that acknowledgment, it waits. If this process is repeated many times,
it renders the target's ports useless because the target is still waiting for the
response. These connection requests are dealt with sequentially; eventually, the
target will abandon waiting for each such acknowledgment. Nevertheless, if it receives
tens or even hundreds of these requests, the port will remain engaged until it has
processed--and discarded--each request.
NOTE: The term syn_flooder is derived
from the activity undertaken by such tools. The TCP/IP three-way handshake is initiated
when one machine sends another a SYN packet. In a typical flooding attack, a series
of these packets are forwarded to a target, purporting to be from an address that
is nonexistent. The target machine therefore cannot resolve the host. In any event,
by sending a flurry of these SYN packets, one is flooding the target with requests
that cannot be fulfilled.
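Seen from the server's side of the wire, those half-open connections simply pile up in the kernel's listen queue. The fragment below is a hedged illustration in C, not Panix's software: a trivial TCP service in which accept() returns only after a handshake completes. The backlog argument to listen() is the queue that a flood of unanswered SYNs works to exhaust, which is why legitimate users are denied service (the port number and queue length here are arbitrary):

/* tinyserv.c -- an illustrative TCP listener (hypothetical values).
 * accept() hands back a socket only for fully established connections;
 * requests whose final acknowledgment never arrives sit half-open in
 * the listen() backlog until the kernel gives up on them.
 * Compile with: cc tinyserv.c -o tinyserv                             */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in sa;
    int s = socket(AF_INET, SOCK_STREAM, 0);

    memset(&sa, 0, sizeof sa);
    sa.sin_family      = AF_INET;
    sa.sin_addr.s_addr = htonl(INADDR_ANY);
    sa.sin_port        = htons(8888);      /* arbitrary example port */

    if (s < 0 || bind(s, (struct sockaddr *)&sa, sizeof sa) < 0) {
        perror("bind");
        return 1;
    }

    /* The second argument is the backlog: the number of pending,
     * not-yet-completed connections the kernel will queue for us.
     * A stream of SYNs that are never acknowledged fills this queue,
     * so genuine connection attempts can no longer get through.      */
    listen(s, 5);

    for (;;) {
        int c = accept(s, NULL, NULL);     /* handshake finished here */
        if (c < 0)
            continue;
        write(c, "hello\r\n", 7);
        close(c);
    }
}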
Syn_flooder attacks are common, but do no real damage. They simply deny other
users access to the targeted ports temporarily. In the Panix case, though, temporarily
was a period lasting more than a week.
Syn_flooders are classified in this book as destructive devices. They are covered
extensively in Chapter 14, "Destructive Devices." These are typically small
programs consisting of two hundred lines of code or fewer. The majority are written
in the C programming language, but I know of at least one written in BASIC.
Crack dot Com
ISPs are popular targets for a variety of reasons. One reason is that crackers
use such targets as operating environments or a home base from which to launch attacks
on other targets. This technique assists in obscuring the identity of the attacker,
an issue we will discuss later. However, DoS attacks are nothing special. They are the
modern equivalent of ringing someone's telephone repeatedly to keep the line perpetually
engaged. There are far more serious types of cracks out there. Just ask Crack dot
Com, the game developer whose servers held the source code for the now famous computer game Quake.
In January, 1997, crackers raided the Crack dot Com site. Reportedly, they cracked
the Web server and proceeded to chip away at the firewall from that location. After
breaking through the firewall, the crackers gained carte-blanche access to the internal
file server. From that location, they took the source code for both Quake
and a new project called Golgotha. They posted this source code on the Net.
NOTE: For those of you who are not programmers,
source code is the programming code of an application in its raw state. This
is most often in human-readable form: ordinary text that a programmer can read. After all testing
of the software is complete (and there are no bugs within it), this source code is
sent a final time through a compiler. Compilers interpret the source code and from
it fashion a binary file that can be executed on one or more platforms. In short,
source code can be thought of as the very building blocks of a program. In commercial
circles, source code is jealously guarded and aggressively proclaimed as proprietary
material. For someone to take that data from a server and post it indiscriminately
to the Internet is probably a programmer's worst nightmare.
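For readers to whom the preceding note is new ground, here is a trivial illustration. The few lines below are source code in exactly the sense described: ordinary readable text that a compiler turns into the binary a machine actually runs:

/* hello.c -- source code in its raw, human-readable state. A compiler
 * fashions an executable binary from it, for example:
 *     cc hello.c -o hello
 * The resulting file "hello" is what actually runs on the machine.   */
#include <stdio.h>

int main(void)
{
    printf("This program began life as readable source code.\n");
    return 0;
}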
For Crack dot Com, the event could have far-reaching consequences. For example,
during the brief period that the code was posted on the Net, its
competitors may have obtained copies of (at least some of) the programming routines.
In fact, the crackers could have approached those competitors in an effort to profit
from their activities. This, however, is highly unlikely. The crackers' pattern of
activity suggests that they were kids. For example, after completing the crack, they
paraded their spoils on Internet Relay Chat. They also reportedly left behind a log
(a recording of someone's activity while connected to a given machine). The Crack
dot Com case highlights the seriousness of the problem, however.
Kriegsman Furs
Another interesting case is that of Kriegsman Furs of Greensboro, North Carolina.
This furrier's Web site was cracked by an animal-rights activist. The cracker left
behind a very strong message, which I have reproduced in part:
- Today's consumer is completely oblivious to what goes on in order for their product
to arrive at the mall for them to buy. It is time that the consumer be aware of what
goes on in many of today's big industries. Most importantly, the food industries.
For instance, dairy cows are injected with a chemical called BGH that is very harmful
to both humans and the cows. This chemical gives the cows bladder infections. This
makes the cows bleed and guess what? It goes straight in to your bowl of cereal.
Little does the consumer know, nor care. The same kind of thing goes on behind the
back of fur wearers. The chemicals that are used to process and produce the fur are
extremely bad for our earth. Not only that, but millions of animals are slaughtered
for fur and leather coats. I did this in order to wake up the blind consumers of
today. Know the facts.
Following this message were a series of links to animal-rights organizations and
resources.
Kevin Mitnick
Perhaps the most well-known case of the public sector being hacked, however, involves
the 1994/1995 escapades of famed computer cracker Kevin Mitnick. Mitnick has been gaining
notoriety since his teens, when he reportedly cracked the North American Aerospace Defense Command
(NORAD). The timeline of his life is truly amazing, spanning some 15 years of cracking
telephone companies, defense sites, ISPs, and corporations. Briefly, some of Mitnick's
previous targets include
- Pacific Bell, a California telephone company
- The California Department of Motor Vehicles
- A Pentagon system
- The Santa Cruz Operation, a software vendor
- Digital Equipment Corporation
- TRW
On December 25, 1994, Mitnick reportedly cracked the computer network of Tsutomu
Shimomura, a security specialist at the San Diego Supercomputer Center. What followed
was a press fiasco that lasted for months. The case might not have been so significant
were it not for three factors:
- The target was a security specialist who had written special security tools not
available to the general public.
- The method employed in the break-in was extremely sophisticated and caused a
stir in security circles.
- The suspicion was, from the earliest phase of the case, that Mitnick (then a wanted
man) was involved in the break-in.
First, Shimomura, though never before particularly famous, was known in security
circles. He, more than anyone, should have been secure. The types of tools he was
reportedly developing would have been of extreme value to any cracker. Moreover,
Shimomura has an excellent grasp of Internet security. When he got caught with his
pants down (as it were), it was a shock to many individuals in security. Naturally,
it was also a delight to the cracker community. For some time afterward, the cracking
community was enthralled by the achievement, particularly because Shimomura had reportedly
assisted various federal agencies on security issues. Here, one of the government's
best security advisors had been cracked to pieces by a grass-roots outlaw (at least,
that was the hype surrounding the case).
Second, the technique used, now referred to as IP spoofing, was complex
and not often implemented. IP spoofing is significant because it relies on an exchange
that occurs between two machines at the system level. Normally, when a user attempts
to log in to a machine, he or she is issued a login prompt. When the user provides
a login ID, a password prompt is given. The user issues his or her password and logs
in (or, he or she gives a bad or incorrect password and does not log in). Thus, Internet
security breaches have traditionally revolved around getting a valid password, usually
by obtaining and cracking the main password file.
IP spoofing differs from this radically. Instead of attempting to interface with
the remote machine via the standard procedure of the login/password variety, the
IP-spoofing cracker employs a much more sophisticated method that relies in part
on trust. Trust is defined and referred to in this book (unless otherwise
expressly stated) as the "trust" that occurs between two machines that
identify themselves to one another via IP addresses.
In IP spoofing, a series of things must be performed before a successful break-in
can be accomplished:
- One must determine the trust relationships between machines on the target network.
- One must determine which of those trust relationships can be exploited (that
is, which of those machines is running an operating system susceptible to spoofing).
- One must exploit the hole.
(Be mindful that this brief description is bare bones. I treat this subject extensively
in its own chapter, Chapter 28, "Spoofing Attacks.")
In the attack, the target machine trusted the other. Whenever a login occurred
between these two machines, it was authenticated through an exchange of numbers.
This number exchange followed a forward/challenge scenario. In other words, one machine
would generate a number to which the other must answer (also with a number). The
key to the attack was to forge the address of the trusted machine and provide the
correct responses to the other machine's challenges. And, reportedly, that is exactly
what Mitnick did.
In this manner, privileged access is gained without ever passing a single password
or login ID over the network. All exchanges happen deep at the system level, a place
where humans nearly never interact with the operating system.
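The weakness such an attack leans on can be illustrated without touching a network. The toy simulation below is emphatically not the exploit itself; it merely shows that if the numbers a host issues at connection time advance predictably, an impostor who never sees a given challenge can still supply the correct answer. The starting value and increment are invented for the example:

/* predict.c -- a toy simulation (not an exploit) of why predictable
 * connection-time numbers defeat address-based trust: anyone who has
 * observed one exchange can compute the next expected reply without
 * ever receiving it. The constants here are invented.               */
#include <stdio.h>

/* Early TCP implementations advanced their initial sequence numbers
 * by a fixed amount, in roughly this fashion.                       */
static unsigned long next_number(unsigned long current)
{
    return current + 64000UL;
}

int main(void)
{
    /* A value observed during an earlier, legitimate connection.    */
    unsigned long observed = 1280000UL;

    /* The target issues its next challenge to the "trusted" host... */
    unsigned long issued   = next_number(observed);

    /* ...and the impostor, who never sees it, simply predicts it.   */
    unsigned long guessed  = next_number(observed);

    printf("number issued by target : %lu\n", issued);
    printf("number guessed by forger: %lu\n", guessed);
    printf("%s\n", issued == guessed
        ? "The forged reply would be accepted."
        : "The prediction failed.");
    return 0;
}

Randomizing those numbers is the obvious countermeasure to this kind of prediction.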
Curiously, although this technique has been lauded as new and innovative, it is
actually quite antiquated (or at least, the concept is quite antiquated).
It stems from a security paper written by Robert T. Morris in 1985 titled A Weakness
in the 4.2BSD UNIX TCP/IP Software. In this paper, Morris (then working for AT&T
Bell Laboratories) concisely details the ingredients to make such an attack successful.
Morris opens the paper with this statement:
- The 4.2 Berkeley Software Distribution of the UNIX operating system (4.2BSD for
short) features an extensive body of software based on the "TCP/IP" family
of protocols. In particular, each 4.2BSD system "trusts" some set of other
systems, allowing users logged into trusted systems to execute commands via a TCP/IP
network without supplying a password. These notes describe how the design of TCP/IP
and the 4.2BSD implementation allow users on untrusted and possibly very distant
hosts to masquerade as users on trusted hosts. Bell Labs has a growing TCP/IP network
connecting machines with varying security needs; perhaps steps should be taken to
reduce their vulnerability to each other.
Morris then proceeds to describe such an attack in detail, some ten years before
the first widely reported instance of such an attack had occurred. One wonders whether
Mitnick had seen this paper (or even had it sitting on his desk whilst the deed was
being done).
In any event, the break-in caused a stir. The following month, the New York
Times published an article about the attack. An investigation resulted, and Shimomura
was closely involved. Twenty days later, Shimomura and the FBI tracked Mitnick to
an apartment in North Carolina, the apparent source of the attack. The case made
national news for weeks as the authorities sorted out the evidence they found at
Mitnick's abode. Again, America's most celebrated computer outlaw was behind bars.
In my view, the case demonstrates an important point, the very same point we started
with at the beginning of this chapter: As long as they are connected to the Net,
anyone can be cracked. Shimomura is a hacker and a good one. He is rumored
to own 12 machines running a variety of operating systems. Moreover, Shimomura is
a talented telephone phreak (someone skilled in manipulating the technology
of the telephone system and cellular devices). In essence, he is a specialist in
security. If he fell victim to an attack of this nature, with all the tools at his
disposal, the average business Web site is wide open to assault over the Internet.
In defense of Shimomura: Many individuals in
security defend Shimomura. They earnestly argue that Shimomura had his site configured
to bait crackers. In Chapter 26, "Levels of Attack," you will learn that
Shimomura was at least marginally involved in implementing this kind of system in
conjunction with some folks at Bell Labs. However, this argument in Shimomura's defense
is questionable. For example, did he also intend to allow these purportedly inept
crackers to seize custom tools he had been developing? If not, the defensive argument
fails. Sensitive files were indeed seized from Shimomura's network. Evidence of these
files on the Internet is now sparse. No doubt, Shimomura has taken efforts to hunt
them down. Nevertheless, I have personally seen files that Mitnick reportedly seized
from many networks, including Netcom. Charles Platt, in his scathing review of Shimomura's
book Takedown, offers a little slice of reality:
Kevin Mitnick...at least he shows some irreverence, taunting Shimomura and trying
to puncture his pomposity. At one point, Mitnick bundles up all the data he copied
from Shimomura's computer and saves it onto the system at Netcom where he knows that
Shimomura will find it....Does Shimomura have any trouble maintaining his dignity
in the face of these pranks? No trouble at all. He writes: "This was getting
personal. ... none of us could believe how childish and inane it all sounded."
It is difficult to understand why Shimomura would allow crackers (coming randomly
from the void) to steal his hard work and excellent source code. My opinion (which
may be erroneous) is that Shimomura did indeed have his boxes configured to bait
crackers; he simply did not count on anyone cutting a hole through that baited box
to his internal network. In other words, I believe that Shimomura (who I readily
admit is a brilliant individual) got a little too confident. There should have been
no relationship of trust between the baited box and any other workstation.
Cross Reference: Charles Platt's critique
of Takedown, titled A Circumlocuitous review of Takedown by Tsutomu Shimomura
and John Markoff, can be found at http://rom.oit.gatech.edu/~willday/mitnick/takedown.review.html.
Summary
These cases are all food for thought. In the past 20 or so years, there have been
several thousand such cases (of which we are aware). The military claims that it
is attacked over 250,000 times a year. Estimates suggest it is penetrated better
than half of the time. It is likely that no site is entirely immune. (If such a site
exists, it is likely AT&T Bell Laboratories; it probably knows more about network
security than any other single organization on the Internet.)
All this having been established, I'd like to get you started. Before you can
understand how to hack (or crack), however, you must first know a bit about the network.
Part II of this book, "Understanding the Terrain," deals primarily with
the Internet's development and design.
© Copyright, Macmillan Computer Publishing. All
rights reserved.