Maximum Security:
A Hacker's Guide to Protecting Your Internet Site and Network
24
Security Concepts
On a quiet fall evening not so long ago, the Internet was forever changed. That
change took only minutes. If you have been reading this book from cover to cover,
you will remember the date in question. However, for readers absorbing this book
selectively, I will reiterate. That date was November 2, 1988. Shortly before dusk,
a worm was unleashed on the network. Within hours, this worm incapacitated many machines
(reportedly over 1,000 of them) and interrupted or otherwise degraded the performance
of thousands more. (Many of these machines or networks were key research centers
engaged in defense-related study.) At the exact moment that the worm was released,
the history and future of the Internet changed forever. No one knew it at the time,
because it would take a full year in the aftermath to assess what an enormous impact
the incident had. But be assured of this: The change occurred in the same instant
that Morris released his code to the network.
Since that time, security has gained almost a cult status. Individuals I know
who have never had a clue about the subject are suddenly diving for security information.
You hear it in restaurants all the time. As you are eating your lunch, the buzz floats
overhead: firewall, router, packet filtering, e-mail bombing, hackers, crackers...the
list is long indeed. (This book would never have been written if the climate weren't
just so.) By now, most people know that the Internet is insecure, but few know exactly
why. Not surprisingly, those very same people are concerned, because most of them
intend to implement some form of commerce on the Internet. It is within this climate
that Internet Voodoo has arisen, conjured by marketeers from the dark chaos that
looms over the Net and its commercial future.
Marketing folks capitalize on ignorance--that's a fact. I know resellers today
who sell 8MB SIMMs for $180 and get away with it. However, while technical consultants
do often overcharge their customers, there is probably no area where this activity
is more prominent than in the security field. This should be no surprise; security
is an obscure subject. Customers are not in a position to argue about prices, techniques,
and so forth because they know nothing about the subject. This is the current climate,
which offers unscrupulous individuals a chance to rake in the dough. (And they are,
at an alarming rate.)
The purpose of this chapter, then, is to offer advice for individuals and small
businesses. I cannot guarantee that this is the best advice, but I can guarantee
that it is from experience. Naturally, everyone's experience is different, but I
believe that I am reasonably qualified to offer some insight into the subject. That
said, let's begin.
How Security Concepts Can Influence Your Choices
First, I want to quickly examine security concepts and how they will influence
your choice of a security consultant. To begin with, know this: "There is nothing
new under the sun." That line is commonly credited to William Shakespeare, but it
comes from Ecclesiastes, written centuries before his time; Shakespeare merely echoed
it (Sonnet 59 opens on the same thought). There is a certain brilliance in that:
the statement that nothing is new was itself nothing new. He used a redundancy to
articulate redundancy. How does this relate to Internet security? Read on.
The truth is, TCP/IP has been around for a long, long time. For example, as I
reported in Chapter 18, "Novell," NetWare had fully functional TCP/IP built
into its operating system back in 1991. UNIX has had it for far longer. So there
is no real problem here. The knowledge is available out there in the void.
The great majority of security breaches stem from human error. (That is because
crackers with limited knowledge can easily cut deep into systems that are erroneously
configured. On more carefully configured networks, 90 percent of these self-proclaimed
"super crackers" couldn't get the time of day from their target.)
These human errors generally occur from lack of experience. The techniques to
protect an Internet server have not significantly changed over the past few years.
If a system administrator or security administrator fails to catch this or that hole,
he needs to bone up on his advisories.
NOTE: I will readily admit that some techniques
have been improved, largely by the academic community and not so much by commercial
vendors. Commercial vendors are usually slightly behind the academic communities,
perhaps by a few months or so. Examples of this might include the development of
automated tools to screen your system for known security holes. Many of these are
written by students or by freelance software developers. These tools certainly streamline
the process of checking for holes, but the holes are commonly known to any security
administrator worth his salt.
So, before you haul off and spend thousands (or even tens of thousands) of dollars
on a security consult, there are some things that you should consider. Here are a
couple test questions:
- Suppose you establish a sacrificial machine, a Macintosh running WebStar and
no other TCP/IP servers. The machine is isolated from your network, it has no valuable
data on it, and basically, it has no inroad to your internal network. Your network
does not run TCP/IP, and none of the publicly accessible nodes perform IP forwarding
in any case. Would you pay a security consultant to scan that Web server box? (Instead
of either having your system administrator scan it or not scan it at all.) If so,
why?
- You want to co-locate a box at an ISP. You normally work with Microsoft Windows
NT (and so does your internal system administrator). Nevertheless, the ISP is trying
to convince you to use a SPARC 20 and is willing to sell you one (or lease you one)
for fair market value. Do you do it? If so, why?
The correct answer to both of these questions is "probably not." Here
are the reasons why:
- Scenario 1: What would the consultant be scanning for? Because the machine is
running no other services but HTTP over WebStar, most modern scanners would render
a laundry list of "connection refused" and "server not reachable"
messages. In other words, the scan would be a complete waste of time and money because
no services exist on the machine. Scanners like those discussed in Chapter 9, "Scanners,"
are used only to attack full-fledged TCP/IP implementations, where services (including
NFS and other protocols) are either available and misconfigured or available and
not configured at all. The question is, would you or your internal system administrator
know this? If not, you might get taken.
- Scenario 2: Why would you agree to place your Web server in the hands of a company
on which you will remain totally dependent? If neither you nor your staff knows UNIX,
insist on an NT box. If the provider balks, find another. Commonly, the ISP staff
might offer the explanation that they feel UNIX is more secure and that they therefore
cannot tolerate an NT box on their Ethernet. If you agree to their terms, you will
either be dependent upon them for all maintenance and programming or you will have
to pay good money to train your system administrator in UNIX.
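Incidentally, the scanner behavior described in Scenario 1 is easy to verify for
yourself. The following sketch, written here in Python for clarity, attempts a TCP
connection to a handful of ports and reports the result; against a box running only
a Web server, everything but port 80 produces exactly the laundry list of refusals
just described. (The host name in the usage comment is a placeholder, not a real
target; probe only machines you own.)

```python
import socket

def probe(host, ports, timeout=2.0):
    """Attempt a TCP connection to each port and report the result."""
    results = {}
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            sock.connect((host, port))
            results[port] = "open"
        except ConnectionRefusedError:
            results[port] = "connection refused"
        except (socket.timeout, OSError):
            results[port] = "unreachable"
        finally:
            sock.close()
    return results

# Example (placeholder host; probe only machines you own):
#   probe("www.example.com", [21, 23, 25, 80, 111])
```

A real scanner does far more (banner grabbing, known-hole checks), but the point
stands: where no service listens, there is nothing for the scanner to report.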
There are literally hundreds of such scenarios. In each, there is an opportunity
for you to get hustled. A security consult is not to be taken lightly. Neither is
the management of your co-located box. Remember that your Web server (wherever it
might be located) is something that can be viewed (and attacked) by the entire world.
Before you can make an educated choice of a security consultant, you need to be
familiar with basic security principles. That's what this chapter is really all about.
About Remote Security Consults
There is a new phenomenon emerging on the Internet. Security consults are now
being done (although perhaps not in great number) from remote locations. This is
where someone in the same city (or another city) tests, defines, and ultimately implements
your security from the outside. In other words, it is done from a location other
than your offices or home. I have a couple points to make regarding this type of
procedure:
- Scanning or penetration testing is commonly done from a remote location. The purpose
of penetration testing (at the end of the day) is to simulate a real-time attack
from the void. There is no replacement for doing this from a remote location. In
this limited area of concern, at least, analysis from a remote location is warranted
and reasonable.
- All other forms of security testing and implementation should be done onsite.
Implementing security from a remote location is not a secure method and may result
in security breaches. As much as the idea may seem attractive to you, I would strongly
advise against having any firm or individual handle your security from a remote location.
If your network is large and is meant to be as secure as possible, even the existence
of a privileged user who can gain remote access to do maintenance work is a security
risk. (For example, why would one cut a hole through a firewall just for the convenience
of off-site work?)
NOTE: As an example, an individual on
the East Coast recently posted an article in Usenet requesting bids on a security
consult. I contacted that party to discuss the matter, mainly out of curiosity. Within
three hours, the party forwarded to me his topology, identifying which machines had
firewalls running, what machines were running IP forwarding, and so forth.
Granted, this individual was simply looking for bids, but he forwarded this type
of sensitive information to me, an individual he had neither seen nor heard of before.
Moreover, if he had done more research, he would have determined that my real name
was unobtainable from my e-mail address, my Web page, or even my provider.
Were it not for the fact that I was on great terms with my then-current provider,
he [the provider] would not even know my name. So, the person on the East Coast forwarded
extremely sensitive information to an unknown source--information that could have
resulted in the compromise of his network.
So, point one is this: Other than penetration testing, all active, hands-on security
procedures should be undertaken at your place of business or wherever the network
is located. Do not forward information to a potential consultant over the Internet,
do not hire someone sight unseen, and finally, do not contract a consultant whose
expertise cannot be in some way verified.
Security Through Obscurity
If a security consultant explains to you (or your system administration staff)
that one or two holes do exist but that it is extremely unlikely that they can be
exploited, carefully consider his explanation. Interrogate him as to what "extremely
unlikely" means and why he thinks the contingency is just so.
If his explanation is that the level of technical expertise required is highly
advanced, this is still not a valid reason to let it slide, particularly if there
are currently no known solutions to the problem. If there are options, take them.
Never assume (or allow a consultant to assume) that because a hole is obscure or
difficult to exploit that it is okay to allow that hole to exist.
Only a few months ago, it was theorized that a Java applet could not access
a client's hard disk drive. That has since been proven to be false. The argument
initially supporting the "impossibility" of the task was this: The programming
skill required was not typically a level attained by most crackers. That was patently
incorrect. Crackers spend many hours trying to determine new holes (or new ways of
implementing old ones). With the introduction of new technologies, such as Java and
ActiveX, there is no telling how far a cracker could take a certain technique.
Security through obscurity was once a sound philosophy. Many years ago, when the
average computer user had little knowledge of his own operating system (let alone
knowledge of multiple operating systems), the security-through-obscurity approach
tended to work out. Things were more or less managed on a need-to-know basis. The
problem with security through obscurity, however, becomes more obvious on closer
examination. It breaks down to matters of trust.
In the old days, when security through obscurity was practiced religiously, it
required that certain users know information about the system; for example, where
passwords were located and what special characters had to be typed at the prompt.
It was common, actually, for a machine, upon connection, to issue a rather cryptic
prompt. (Perhaps this can be likened to the prompt one might have received as a Delphi
user just a few years ago.) This prompt was expecting a series of commands, including
the carrier service, the terminal emulation, and so on. Until these variables were
entered correctly (with some valid response, of which there were many), nothing would
happen. For example, if the wrong string was entered, a simple ? would appear.
A hacker coming across such a system would naturally be intrigued, but he could spend
many hours (if not weeks) typing in commands that would fail. (Although the command
HELP seems to be a pretty universal way to get information on almost any
system.)
Things changed when more experienced users began distributing information about
systems. As more and more information leaked out, more sophisticated methods of breaching
security were developed. For example, it was shortly after the first release of internal
procedures in CBI (the Equifax credit-reporting system) that commercial-grade software
packages were developed to facilitate a breaking and entering into that famous computerized
consumer credit bureau. These efforts finally culminated with the introduction of
a tool called CBIHACK that automated most of the effort behind cracking Equifax.
Today, it is common for users to know several operating systems in at least a
fleeting way. More importantly, however, information about systems security has been
so widely disseminated that at this stage, even those starting their career in cracking
know where password files are located, how authentication is accomplished, and so
forth. As such, security through obscurity is no longer a valid stance, nor should
it be, especially because of one insidious element: for it to work at all, humans
must be trusted with information. For example, even
when this philosophy had some value, one or more individuals with an instant need-to-know
might later become liabilities. Disgruntled employees are historically well known
to be in this category. As insiders, they would typically know things about a system
(procedures, logins, passwords, and so forth). That knowledge made the security inherently
flawed from the start.
It is for these reasons that many authentication procedures are now automated.
In automated authentication procedures, the human being plays no part. Unfortunately,
however, as you will learn in Chapter 28, "Spoofing Attacks," even these
automated procedures are now suspect.
In any event, view with suspicion any proposal that a security hole (small though
it may be) should be left alone.
Choosing a Consultant
There are many considerations in choosing a security consultant. First, it is
not necessary that you contract one of the Big Six firms (for example, Coopers and
Lybrand) to secure your network. If you are a small business, this is likely cost
prohibitive. Also, it is overkill. These firms typically take big contracts for networks
that harbor hundreds (or in WANs, thousands) of machines.
If you are a small firm and cannot afford to invest a lot of money in security,
you may have to choose more carefully. However, your consultant should meet at least
all the following requirements:
- He should be local.
- He should have at least four years' experience as a system administrator (or apprentice
administrator) on your platform. (If some of that experience was in a university,
that is just fine.)
- He should have a solid reputation.
- Generally, he should not have a criminal record.
- He should have verifiable references.
Why Local?
Your consultant should be local because you will need to have him available on
a regular basis. Also, as I've noted, remote administration of a network is just
not a wise thing.
Experience
You will notice that I said university experience will suffice, so long as it does
not comprise the totality of the consultant's security education. Why? Because the
academic community is probably the closest to the cutting edge of security. If you
thumb through this book and examine the references, you will notice that the majority
of serious security papers were authored by those in the academic community. In fact,
even many of the so-called commercial white papers cited within this book were also
authored by students--students who graduated and started security firms.
Reputation
I suggest that your consultant should have a solid reputation, but I want to qualify
that. There are two points to be made here, one of which I made at the beginning
of this book. Just because former clients of a consultant have not experienced security
breaches does not necessarily mean that the consultant's reputation is solid. As
I have said, many so-called security specialists conduct their "evaluation"
knowing that they have left the system vulnerable. In this scenario, the individual
knows a little something about security, but just enough to leave his clients in
a vulnerable situation with a false sense of security. Technically, a totally unprotected
network could survive unharmed for months on the Internet so long as crackers don't
stumble across it.
It would be good if you could verify that your potential consultant had been involved
in monitoring and perhaps plugging an actual breach. Good examples are situations
where he may have been involved in an investigation of a criminal trespass or other
network violation.
Equally, past experience working for an ISP is always a plus.
Criminal Record
Background checks are intrusive. I realize that. However, consider what you are
undertaking. Most smaller businesses today would be paralyzed if their data were
suddenly corrupted or unusable. If yours is such a business, and your potential consultant
is not an established firm, I would seriously consider a background check. However,
the existence of a criminal record (especially if that record is for computer-related
crimes) does not necessarily preclude the individual as a candidate. Much depends
upon the time that has passed since the conviction, the circumstances of the case,
and so forth. For example, I would hire Randal Schwartz without thinking twice.
His technical skills are well known.
Your Network
There are several ways you can view security, but I prefer the simple approach
and that approach is this: Your network is your home. Consider that for a moment.
Try to visualize your network as an extension of yourself. I realize that this sounds
a bit esoteric, but it really isn't. You can more easily grasp what I am driving
at by considering this: What type of data is on your network? I will wager that I
can tell you what's there. Yes; I will bet that only the most unimportant things
are on your network--things like your financial information, your identity, your
thoughts, your feelings, your personal reflections, your business...your life.
Would you let the world walk through the front door of your home? Would you let
complete strangers rifle through your drawers, looking for personal documents or
financial statements? Of course not. Then why would you let someone do it over a
network? The answer is: You wouldn't. The problem is, computers seem relatively benign,
so benign that we may forget how powerful their technology really is.
Software vendors want us to rush to the Internet. The more we use the network,
the more software they can sell. In this marketing frenzy, they attempt to minimize
some fairly serious problems out there. The truth is, the Internet is not secure
and will continue to exist in this state of insecurity for some time to come. This
is especially so because many of the networking products used in the future will
be based on the Microsoft platform.
Admittedly, Microsoft makes some of the finest software in the world. Security,
however, has not been its particular area of expertise. Its Internet operating system
is going to be NT--that's a fact. That is also where the majority of Microsoft's
security efforts are being concentrated, and it has made some significant advances.
However, in the more than 20 years that UNIX has been in existence, it has never
been completely secure. This is an important point: UNIX is a system that was designed--almost
from its beginning--as an operating system for use on the Internet. It was what the
Defense Department chose as the platform to develop ARPAnet. The people who designed
it are among the most talented (and technically minded) software engineers on the
planet. And even after all this, UNIX is not secure. We should expect, then, that
Windows NT will take some time to get the bugs out.
So, in closing on this subject, I relate this: Your network is your home. It is
worthy of protection, and that protection costs money. Which brings us to the next
issue...
Cost
How much should security cost? It depends on what type of network you have. If
your network is large and heterogeneous, those conditions are going to increase the
cost. It is important that you understand why, because when you go to the table to
negotiate a security package, you need to know what you are talking about.
The Homogeneous Network
If you currently have a homogeneous network, you should see a break in cost. Here
is why: Each operating system implements TCP/IP just slightly differently than the
rest, at least at the application level. Each operating system also has one or more
additional or proprietary protocols that aren't available on other systems (or that
can be available, but only with special software). For example, Windows 95 uses the
SMB protocol, which is not widely available in default installations of every operating
system. Certainly, there are clients available; one of them is SAMBA, which runs
on Linux and perhaps on other operating systems. Because each operating system is
different but all machines running the same operating system are basically the same,
a security consult of a homogeneous network is less intensive than one that harbors
many different platforms. It should therefore cost less.
While this is true, it does not mean that you can get a homogeneous network secured
for next to nothing. In most instances, it is not possible for security attributes
to simply be cloned or replicated on all workstations within the network. Various
security issues may develop. Some of those involve topology, as I have explained
in other chapters and will again discuss here.
We know that a network segment is a closed area; almost like a network within
itself. We also know that spoofing beyond that network segment is almost impossible.
(Almost.) The more network segments your network is divided into, the more secure
your network will be. (Ideally, each machine would be hardwired to a router. This
would entirely eliminate the possibility of IP spoofing, but it is obviously cost
prohibitive.) Where you make those divisions will depend upon a close assessment
of risk, which will be determined between your technical staff and the consultant.
For each segment, you will incur further cost, not only for the consultant's services
but for the hardware (and possibly for software).
The Heterogeneous Network
If you have a network comprised of many different platforms, the problem of securing
it becomes more complex. Here's an example, again using SAMBA as a focal point. In
certain situations, passwords are revealed when using SAMBA in traffic between UNIX
and Windows 95 boxes. The more protocols you have running and the more third-party
software from different vendors (on different platforms) you have, the more complicated
your security assessment will be.
Certainly, even from a practical standpoint, there are immediate problems. First,
due largely to the division between the PC and workstation worlds, the security consultants
you contract may be unfamiliar with one or more of the platforms within your network
and may need to call in outside help. Also, and this is no small consideration,
your consultants may ultimately be forced to provide at least a small portion of
proprietary code: their own. If this subject crops up, it should be discussed thoroughly.
There is a good chance that you can save at least some cost by having these consultants
tie together existing security packages, using their own code as the glue. This is
not nearly as precarious as it sounds. It may involve nothing more than redirecting
the output of log files or other, ongoing processes to plain text (or some other
form suitable for scanning by a program on another platform).
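Glue of that sort can be quite modest. Here is a hedged sketch (the syslog-style
input format and the field names are my assumptions, not any particular product's
output) of a filter that reduces log lines to tab-separated plain text, a form that
a scanning program on any other platform could consume:

```python
import re

# Matches a syslog-style prefix: "Mar  4 12:01:55 hostname program: message"
# This pattern is an assumption about the input format, not a standard.
LINE = re.compile(
    r"^(?P<date>\w{3}\s+\d+\s[\d:]{8})\s+"
    r"(?P<host>\S+)\s+"
    r"(?P<program>[^:\s]+):\s*"
    r"(?P<message>.*)$"
)

def normalize(line):
    """Return a tab-separated record, or None if the line doesn't parse."""
    match = LINE.match(line.rstrip("\n"))
    if match is None:
        return None
    return "\t".join(match.group("date", "host", "program", "message"))

# Typical use, as a filter over an existing log:
#   for raw in open("/var/log/messages"):   # hypothetical path
#       record = normalize(raw)
#       if record is not None:
#           print(record)
```

Nothing here changes the behavior of the security packages themselves; the script
merely reshapes their output so tools on different platforms can share it.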
The problem with hiring toolsmiths of this sort is that you may find your security
dependent upon them. If your local system administrator is not familiar with the
code they used, you may have to rely on the consultants to come for second and third
visits. To guard against this, you should ensure good communications between your
personnel and the security team. This is a bit harder than it seems.
First, you have to recognize at least this: Your system administrator is God on
the network. That network is his domain, and he probably takes exceptional pride
in maintaining it. (I have seen some extraordinary things done by system administrators--truly
commercial-grade applications running, custom interfaces, and so forth.) When an
outside team comes to examine your system administrator's backyard, no matter what
they say, the experience feels a little intrusive. Diplomacy is really an important
factor. Remember: The consultants will leave, but you have to live with your system
administrator on a daily basis.
The General Process
Before you contact any firm and have them come to your offices (or home, I suppose),
you need to gather some information on a few things, including the following:
- Hardware. This should identify the make, manufacturer, model, and series of each
workstation, hub, router, network adapter, and so forth. Ideally, you should also
have a list of how much memory is in each machine, the capacity of the disk drives,
and the specs of your Ethernet. (For example, 10Base-T or whatever.)
- Software. All types of network software that you intend to run, and their version
numbers.
- Protocols. The protocols you are now running (or plan to run in the future).
Try to prioritize these. For example, if there is a single machine that simply must
run NFS, highlight that. Also, report the type of connectivity that you currently
have.
- Scope. The maximum number of workstations you plan to run, where they are located,
where the network segments exist, where you plan to expand, and any curiosities that
might be relevant. (For example, that you have older, legacy Novell NetWare servers
running in one office. If these are sufficiently old, they may transmit unencrypted
passwords. Your consultant will need to know that. Don't let something like that
crop up later.)
Next, you will need to put together a little model of your company's trust system. That
is, you will need to have your system administrator devise some easy listing method
to peruse privileges. This will identify what each user or workstation requires in
the way of privileges. It might be worth outputting this not only in text format,
but also in some graphical representation. On certain platforms, this type of software
is available, but it is quite expensive. It is probably better (for small firms trying
to save money) if this is done using some technical drawing package (such as Visio).
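On a UNIX host, even a short script can produce the kind of text listing I am
describing. Here is a minimal sketch; it parses /etc/group-style lines handed to
it, so the sample data below is purely illustrative:

```python
from collections import defaultdict

def users_by_privilege(group_lines):
    """Map each user to the groups (privileges) he belongs to.

    Expects lines in /etc/group format: name:password:gid:member,member
    """
    membership = defaultdict(list)
    for line in group_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split(":")
        group = fields[0]
        members = fields[3] if len(fields) > 3 else ""
        for user in filter(None, members.split(",")):
            membership[user].append(group)
    return dict(membership)

# Illustrative data; on a real host you would read open("/etc/group").
sample = [
    "wheel:*:0:root,carol",
    "operator:*:5:carol",
    "www:*:80:webmaster",
]
for user, groups in sorted(users_by_privilege(sample).items()):
    print(f"{user}\t{', '.join(groups)}")
```

The graphical representation can then be drawn by hand (or in a package such as
Visio) from this text listing.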
This information should be bound together. (There are copying services that will
bind such a folder, such as Kinko's Copies, or perhaps you have in-house facilities
that can do this.) Each section should be separated by a tab that identifies that
section. Contained within this folder should also be the following items:
- A statement from the system administrator about the security of the system. This
should include any special considerations, including whether special software has
been written, what type of security utilities are now being used, which ones could
not be used, and why.
- A statement of what type of security policies have been enforced within your
network, a history of any security breaches that you may have had, and so forth.
This compilation of information should be handed over to the security consultants
only after you have verified their reputation, because once it is in their hands,
they will know more about your network than your system administrator did just one
week before. However, it is important to collect the information, and here is why:
If you don't do it, the security consulting firm will. That will cost a lot of money.
Moreover, it will entail them having to disrupt daily activities even further than
they already have to while implementing solutions.
The next step may or may not be within your budget, but if it is, I would strongly
recommend it. Locate two separate security firms known to have good reputations.
(Even if they are in a different state; it doesn't matter.) Ask those firms what
it would cost to examine the information and make a recommendation, a kind of mock
bid. Included within their summaries should be a report of how such a job would be
implemented if they were doing it. This will not only serve as an index for what
the probable cost and effort would be, but also may alert you or your system administrator
to special issues, issues particular to your precise configuration. That having been
done, you can begin your search for a good, local source.
Degrees of Security
There are different ways that you can implement security. There is no law saying
that you have to connect your entire network to the Internet. (Although I see a fair
number of businesses doing it.) One simple way to reduce your cost is to create only
a very limited segment that has connectivity. If your primary concern is receiving
customer feedback (and providing some promotional information), there really is no
need to connect at all. Certainly, an ISP can host a page (or even co-locate a box)
for you.
However, if you are determined to provide dedicated access, with a server under
your local control, there are some things you can do to greatly increase security.
First, if the only box you are placing out on the freeway is a Web server (and you
are concerned about that server being cracked), you can use read-only media. This
procedure is admittedly more difficult to implement than a live file system (one
that is read/write), but the gains you realize in security are immense. Under such
a scenario, even if a cracker gains root access, there is very little that he can
do. The downside to this, of course, is that dynamic pages cannot be built on-the-fly,
but if you are providing an auto-quote generator or some similar facility (perhaps
even interfacing with a database), it can still be done.
Really, the key is to enclose all CGI into a restricted area. The CGI programs
read the data on the read-only media and generate a resulting page. This is a very
secure method of providing technical support, product lists, and prices to clients
in the void. Essentially, so long as you back up your CGI, you could have that identical
machine up in one hour or less, even if crackers did manage to crash it. This type
of arrangement is good for those who are only providing information. It is poor for
(and inapplicable to) those seeking to accept information. If you are accepting information,
this might involve a combination of secure HTML packages or protocols, where the
information received is written to removable, write-once media.
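To make the read-only arrangement concrete, here is a hedged sketch of such a CGI
program, written in Python; the mount point and file layout are hypothetical. The
program only reads from the mounted read-only media and writes HTML to standard
output, so even a cracker with root access gains nothing writable:

```python
import html

def render_price_list(lines):
    """Build an HTML page from 'product|price' records on read-only media."""
    rows = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        product, _, price = line.partition("|")
        rows.append(
            "<tr><td>{}</td><td>{}</td></tr>".format(
                html.escape(product), html.escape(price)
            )
        )
    return (
        "<html><body><table>\n" + "\n".join(rows) + "\n</table></body></html>"
    )

# As a CGI script, one would emit the header and then the page:
#   print("Content-Type: text/html\n")
#   with open("/cdrom/data/prices.txt") as f:   # hypothetical read-only path
#       print(render_price_list(f))
```

Restoring such a machine after a crash is then a matter of remounting the media
and reinstalling the (backed-up) CGI, which is why recovery can take an hour or less.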
The sacrificial host is really the safest choice. This is a host that is expressly
out in the open and that you expect to be cracked. Certainly, this is far preferable
to having any portion of your internal network connected to the Internet. However,
if you also want your local employees or users to be able to access the Net, this
is entirely impractical. It can, however, be implemented where you do not expect
much access from the inside out, particularly in commerce situations.
A commerce situation is one where you are accepting credit card numbers over a
browser interface. Be very careful about how you implement such schemes. Here is
why: There are various paths you can take and some of them represent a greater risk
than others. Typically, you want to avoid (at any reasonable cost) storing your customers'
credit card numbers on any server connected to the network. (You have already seen
the controversy that developed after it was learned that Kevin Mitnick had acquired credit card numbers--reportedly 20,000--from the drives of Netcom.)
Generally, where you are accepting credit card numbers over the Internet, you
will also be clearing them over the network. This typically requires the assistance
of an outside service. There are various ways that this is implemented, although
two techniques dominate that market.
Local Saves
In a local save scenario, the information is piped through some secure, encrypted
HTTP session (S-HTTP, for example). Usually, this is done through a form written specifically
for that purpose. The form outputs the information to a local disk somewhere, from
which it can later be retrieved for verification purposes. Along that journey from
the input form to the disk, the numbers may be sent through several processes. One
is where the numbers are examined against a common algorithm that determines (first
and foremost) whether the submitted credit card number is even a real one. By real, I mean that it could potentially be real. This is a somewhat flawed form of verification. It relies on the same checksum algorithms that are used to generate card numbers in the first place. If the submitted number could not have been produced by those algorithms, it is a dreamt-up number, something that someone randomly guessed. There are two flaws with this type of verification,
one in the basic concept and the other in reference to security.
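In practice, the common algorithm being described is the mod-10 (Luhn) checksum. Here is a minimal sketch of the check; the function name and digit handling are my own, not taken from any particular package:

```python
def luhn_valid(number: str) -> bool:
    """Mod-10 (Luhn) check: double every second digit from the right,
    fold two-digit results, and require the sum to be divisible by 10."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    if not digits:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9          # fold, e.g. 14 -> 1 + 4 = 5
        total += d
    return total % 10 == 0

# A number can pass this check and still belong to no real account:
print(luhn_valid("4111 1111 1111 1111"))  # True -- mathematically sound only
```

Passing this check says nothing about whether an account exists or has funds, which is precisely why the approach is flawed.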
The first problem is this: The algorithms used are now widely disseminated. That
is, there are credit card number generators available across the Internet that will produce, or verify, numbers that pass the checksum. Kids used them for years to circumvent the security of Internet service providers.
TIP: One very good example is the class of utilities that exist for unlawfully accessing AOL. These utilities have, embedded within their
design, automatic generators that produce a laundry list of card numbers that will
be interpreted as valid. When these programs first emerged, the credit card number
generators were primitive and available as support utilities. As using generators
of this variety became more common, however, these utilities were incorporated into
the code of the same application performing the dial-up and sign-on. The utilities
would pop up a window list from which the cracker could choose a number. This number
would be sent (usually by the SendKeys function in VB) to the registration
form of the provider.
So, at the start, individuals could come forward with at least mathematically
sound numbers for submission. Thus, simple algorithm credit card validation subjects
the accepting party to a significant amount of risk. For example, if this verification
is used in the short run but the cards are later subjected to real verification, the interim period is the window during which the accepting party stands to lose goods or services to a fraudulent charge. If this period is extended (and the temporary approval of such a credit card number grants the submitter access to ongoing services), then technically, the accepting party is losing money for every day that the card goes unvalidated.
Secondly, and perhaps more importantly, storing the numbers on a local drive could
prove a fatal option. You are then relying upon the security of your server to protect
the data of your clientele. This is not good. If the information is ultimately captured,
intercepted, or otherwise obtained, potentially thousands (or even hundreds of thousands)
of dollars might be at stake. If there is a subsequent investigation (which there
usually is), it will ultimately come out that the seed source for the numbers was
your hard disk drives. In other words, after the Secret Service (or other investigating
party) has determined that all victims shared only one common denominator (using
your service), you will have a problem.
This is especially true if your system administrator fails to detect the breach
and the breach is then an ongoing, chronic problem. There is a certain level at which
this could raise legal liability for your company. This has not really been tested
in the courts, but I feel certain that within the next few years, special legislation
will be introduced that will address the problem. The unfortunate part of this is
as follows: Such a case would rely heavily on expert testimony. Because this is a
gray area (the idea of what "negligent" system administration is, if such
a thing can exist), lawyers will be able to harangue ISPs and other Internet services
into settling these cases, even if only in an effort to avoid sizable legal bills.
By this, I mean that they could "shake down" the target by saying "I
will cost you $50,000.00 in legal bills. Is it worth the trouble to defend?"
If the target is a large firm, its counsel will laugh this off and proceed to bury
the plaintiff's counsel in paperwork and technical jargon. However, if the target
is a small firm (perhaps hiring a local defense firm that does not specialize in
Internet law), a legal challenge could be enormously expensive and a drain on resources.
If you have to choose, try to saddle some third party with the majority of the liability.
In other words, don't store those numbers on your drives if you can help it.
Remote Saves via CGI
The second scenario may or may not be preferable. This is where you drop a secure
HTML form into the structure of your Web site. (This form is provided by the credit
card clearing service.) With this, you will likely also receive customized scripts
that redirect the data submitted in that form to a remote server. That remote server
fulfills one purpose only: clearing the numbers.
NOTE: There are various methods through
which the mechanics of this process are achieved. One is where the credit card clearing
company has proprietary software that attaches to a particular port. On both the
client and the server end, this port traffics the information (which is encrypted
before it leaves the client and decrypted after the arrival at the server). More
than likely, the remote server refuses connections on almost all other ports, or
the information is filtered through a pinhole in a firewall.
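The port-based arrangement described in the note can be sketched as follows. Everything here is hypothetical: the host name, port, and payload format are invented, and standard TLS stands in for whatever proprietary encryption a real clearing service would supply with its own client software.

```python
import json
import socket
import ssl

CLEARING_HOST = "clearing.example.com"  # hypothetical clearing server
CLEARING_PORT = 8443                    # hypothetical dedicated port

def build_payload(card_number: str, amount_cents: int) -> bytes:
    """Package the form fields; the merchant keeps no copy on local disk."""
    return json.dumps({"card": card_number, "amount": amount_cents}).encode()

def submit(payload: bytes) -> bytes:
    """Encrypt in transit: wrap the socket in TLS before anything is sent.
    The remote server refuses connections on nearly all other ports."""
    ctx = ssl.create_default_context()
    with socket.create_connection((CLEARING_HOST, CLEARING_PORT)) as raw:
        with ctx.wrap_socket(raw, server_hostname=CLEARING_HOST) as conn:
            conn.sendall(payload)
            return conn.recv(1024)  # clearing service's approve/decline reply
```

Note that the card number exists on the merchant's machine only transiently, in memory, on its way to the clearing port; nothing is saved locally.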
The advantages and disadvantages are diverse in this scenario. First, there is
the obvious problem that the accepting party is resigned to traveling blind; that
is, they will never have the credit card information within their possession. Because
of this, disputed claims are a serious headache.
Here's an example: A kid gets his parent's credit card number and charges up a
storm. This information is validated by the remote server, with the accepting party
storing no information. Later, the parent disputes the transaction, claiming that
he never authorized such a charge. This is to be expected and will happen periodically. However,
obtaining records and then sorting out that dispute is both a logistical and legal
problem. It is not quite as simple as disputing unauthorized charges on one's telephone
bill. Because the party that cleared (and ultimately collected on) the charge is
a third party (one that has no part in the exchange of goods or services), confusion
can easily develop.
Imagine now if you were such a victim. You contact the party that is the apparent
recipient of the charge, only to find that the company has "nothing to do with
it." When consumers are confronted with this type of situation, they become
less likely to do commerce over the Net. And while this is essentially no different
from being confronted with unauthorized 900-number charges on your telephone bill,
the average consumer will view the Internet with increasing suspicion. This is bad
for Internet commerce generally. Despite that fact, however, this method is generally
regarded as the most secure.
The Overall Picture of Net Commerce
Here is the challenge for Internet commerce consultants, another variable to figure
in before creating a viable package. For example, one might be designing a "total
solution" package involving co-location of a box, Web development, security,
and credit card clearing. Making such a package can be a difficult task. Your choices
must be carefully considered.
Naturally, there is also the issue of cost. Most clearing companies take a piece
of the action, which means that they charge a percentage for each charge cleared.
Sometimes there are variations on this theme, but there are three basic scenarios.
In the first, they charge a sizable sum for setup and request no further money from
the client, instead reaping their percentage from the credit card companies at the
other end. Another is where the initial cost is lower, but the client is charged
a percentage on each transaction. Still another, although less common, is where the
middleman company may take a smaller percentage from both sides, thereby distributing
the load and making their pricing seem more competitive to both client and credit
card company.
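The trade-off among these fee structures is easy to put in concrete terms. The setup fees and percentages below are invented for illustration; real contracts vary:

```python
def merchant_cost(setup_fee: float, rate: float,
                  txn_count: int, avg_charge: float) -> float:
    """Total cost to the merchant: one-time setup plus a cut of each charge."""
    return setup_fee + rate * txn_count * avg_charge

# A hypothetical year of 1,000 transactions averaging $50 each:
flat = merchant_cost(2000.00, 0.00, 1000, 50.00)  # $2,000 total
cut = merchant_cost(250.00, 0.03, 1000, 50.00)    # $250 + 3% of $50,000 = $1,750
```

At this (invented) volume the percentage model comes out cheaper, but the break-even point shifts with transaction count, so the right choice depends on projected volume.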
There are many services you can contract, including both consulting firms and actual software and hardware solution vendors.
In closing on the issue, I would suggest that you read at least a few of the following
white papers, articles, or technical reports. Some are more difficult to find than
others, and I would suggest that you take those papers for which I have provided
no online address and run them through a university library bibliography search.
Many of them are available through services like WorldCat and Uncover.
Credit Card Transactions: Real World and Online. Keith Lamond. 1996.
Digital Money Online. A Review of Some Existing Technologies. Dr. Andreas Schöter
and Rachel Willmer. Intertrader Ltd. February 1997.
Millions of Consumers to Use Internet Banking. Booz, Allen & Hamilton Study
Indicates.
A Bibliography of Electronic Payment Information.
Electronic Cash, Tokens and Payments in the National Information Infrastructure.
Electronic Commerce in the NII.
A Framework for Global Electronic Commerce. Clinton Administration.
Card Europe UK--Background Paper. Smartcard Technology Leading To Multi Service
Capability.
Electronic Payment Schemes. Dr. Phillip M. Hallam-Baker. World Wide Web
Consortium.
Generic Extensions of WWW Browsers. Ralf Hauser and Michael Steiner. First
Usenix Workshop on Electronic Commerce. July 1995.
Anonymous Delivery of Goods in Electronic Commerce. Ralf Hauser and Gene
Tsudik. IBMTDB, 39(3), pp. 363-366. March 1996.
On Shopping Incognito. R. Hauser and G. Tsudik. Second Usenix Workshop
on Electronic Commerce. November 1996.
The Law of Electronic Commerce. EDI, Fax and Email: Technology, Proof and Liability.
B. Wright. Little, Brown and Company. 1991.
Fast, Automatic Checking of Security Protocols. D. Kindred and J. M. Wing.
Second Usenix Workshop on Electronic Commerce, pp. 41-52. November 1996.
Electronic Commerce on the Internet. Robert Neches, Anna-Lena Neches, Paul
Postel, Jay M. Tenenbaum, and Robert Frank. 1994.
NetBill Security and Transaction Protocol. Benjamin Cox, J. D. Tygar, and
Marvin Sirbu. First Usenix Workshop on Electronic Commerce. July 1995.
CyberCash Credit Card Protocol. Donald E. Eastlake, Brian Boesch, Steve
Crocker, and Magdalena Yesil. Version 0.8. July 1995. (Internet Draft.)
Commerce on the Internet--Credit Card Payment Applications over the Internet.
Taher Elgamal. July 1995.
Business, Electronic Commerce and Security. B. Israelsohn. 1996.
Summary
Be prepared. If you plan to establish a dedicated connection to the Internet and
security is an important issue for you, it is wise to learn the terrain. I am not
suggesting that security specialists are unscrupulous; I am simply warning you of
potential pitfalls in the security process. By gathering knowledge about your network,
your trust models, and Internet security in general, you will fare far better. It's
a jungle out there; you better believe it.
© Copyright, Macmillan Computer Publishing. All
rights reserved.