CHAPTER 6
Hardware and Software in the Enterprise
Objectives
As a manager, you’ll face many decisions about using hardware and software to improve the performance of your firm. After completing this chapter, you will be able to answer the following questions:
1. What computer processing and storage capability does our organization need to handle its information and business transactions?
2. What arrangement of computers and computer processing would best benefit our organization?
3. What kinds of software and software tools do we need to run our business? What criteria should
we use to select our software technology?
4. Of what new software technologies should we be aware? How would they benefit our organization?
5. How should we acquire and manage the firm’s hardware and software assets?
FOCUS ON THE FEATURES
CASE STUDIES
Opening case: Nor-Cargo Revamps Its IT
Infrastructure
Window on Management: The Case for Linux
Window on Technology: Application
Integration to the Rescue
Chapter-ending Case: Zurich North America
Hunts Down Its IT Assets
HANDS-ON PROBLEM SOLVING
Application Software Exercise:
Spreadsheet Exercise: Evaluating Hardware and
Software Options
Dirt Bikes U.S.A: Analyzing the Total Cost of
Ownership (TCO) of Desktop Software Assets
Electronic Business Project:
Planning and Budgeting for a Sales Conference
WEB ACTIVITIES
www.prenhall.com/laudon | Internet Connection:
Evaluating Computer Hardware Vendors | Interactive
Study Guide | Additional Case Studies | International
Resources
Nor-Cargo Revamps Its IT Infrastructure
Nor-Cargo started out in 1920 as a
small cargo services company that ran
ships up and down the coast of
Norway. About 15 years ago, it began
expanding, acquiring dozens of other
firms. Today Nor-Cargo is the largest
transportation company in Norway,
with $4.3 billion in annual revenue. Nor-Cargo operates
from the Arctic Circle down to the European continent
with eight areas of business, including package transportation, air and sea freight, freight forwarding, and
third-party logistics.
This growth did not come without a price. Nor-Cargo’s information technology (IT) infrastructure was
challenged by all of its acquisitions, each with its
own technology platforms; business processes; and
applications for payroll, accounting, and customs.
Lacking a centralized IT strategy, Nor-Cargo suffered
from operational inefficiency, redundant resources,
high software development costs, and limited coordination between all of its companies. Nor-Cargo had too
many different operating systems and too many servers
in a distributed environment, underutilizing its data
storage capacity by 50 percent. All of these disparate
systems forced Nor-Cargo to maintain separate information systems staff for many of the cities it serviced and
prevented the company from responding rapidly to market changes.
Nor-Cargo management wanted to reduce the size of
its information systems workforce by 20 percent each
year. It also wanted a coordinated IT infrastructure that
would allow it to consolidate data in a single repository.
It used ECOstructure blueprints created by Cisco Systems, EMC, and Oracle to provide new, secure storage, networking, and software architectures that could accommodate growing computing requirements in the future.
Consultants from Rubik Consultants performed a total
cost of ownership (TCO) analysis showing that Nor-Cargo
should move from a distributed architecture to a centralized one where it could standardize on one application
for each area of the company and take advantage of leading-edge logistics systems. The consultants recommended
Nor-Cargo adopt a standardized network and IT platform
as a common framework for its mission-critical systems.
Oracle provided an integrated e-business software platform accessible to Nor-Cargo employees, suppliers, and
customers through a Web interface. Cisco provided network security and management software and hardware.
After implementing the new infrastructure, Nor-Cargo
could aggregate data from multiple sources, giving managers the ability to analyze the data. For example, management can now examine truck traffic patterns and use
such data to make truck trips more cost-efficient. Nor-Cargo can quickly roll out new applications over the
infrastructure, including a Web-based bar code system
that lets customers track packages. Even though Nor-Cargo has not finished rolling out all of its applications,
it has realized savings from improved communications,
smaller information systems staff, and reductions in the
number of its servers from 150 to 50.
Sources: Cisco Systems, “Case Study: Nor-Cargo Brings All Ships
into One Port,” Maximizing the Data Center, 2003; Eric J. Adams,
“Streamline Delivery at Nor-Cargo,” Cisco IQ Magazine,
September/October 2002; and www.nor-cargo.no, accessed August
16, 2003.
Management Challenges
Nor-Cargo found that its efficiency and competitiveness were hampered by poorly managed hardware and
software technology that lacked the functionality for running
its business. The company found it could lower operating
costs and provide better services to customers by improving
the selection and management of its hardware and software.
In order to consolidate hardware and software assets and
select appropriate technology, Nor-Cargo’s management had
to understand the capabilities of computer hardware and software technology, how to select hardware and software to
meet current and future business requirements, and the
financial and business rationale for its hardware and software
investments. Computer hardware and software technologies
can improve organizational performance, but they raise the
following management challenges:
1. The centralization versus decentralization debate.
A long-standing issue among information system managers and CEOs has been the question of how much to
centralize or distribute computing resources. Should
processing power and data be distributed to departments
and divisions, or should they be concentrated at a single
location using a large central computer? Should organizations deliver application software to users over networks from a central location or allow users to maintain
software and data on their own desktop computers?
Client/server and peer-to-peer computing facilitate
decentralization, but network computers and mainframes support a centralized model. Which is the best
for the organization? Each organization will have a different answer based on its own needs. Managers need to
make sure that the computing model they select is compatible with organizational goals.
2. The application backlog. Advances in computer software have not kept pace with the breathtaking productivity gains in computer hardware. Developing software has
become a major preoccupation for organizations. A great
deal of software must be intricately crafted. Moreover, the
software itself is only one component of a complete information system that must be carefully designed and coordinated with organizational and hardware components. The
“software crisis” is actually part of a larger systems analysis,
design, and implementation issue, which is discussed in
detail later. Despite the gains from fourth-generation languages, personal desktop software tools, object-oriented
programming, and software tools for the Web, many businesses continue to face a backlog of two to three years in
developing the information systems they need, or they
may not be able to develop them at all.
Although managers and business professionals do not need to be computer technology
experts, they should have a basic understanding of the role of hardware and software in the
organization’s information technology (IT) infrastructure so that they can make technology
decisions that promote organizational performance and productivity. This chapter surveys
the capabilities of computer hardware and computer software, and highlights the major
issues in the management of the firm’s hardware and software assets.
6.1 COMPUTER HARDWARE AND INFORMATION TECHNOLOGY INFRASTRUCTURE
Computer hardware, which we defined in Chapter 1, provides the underlying physical foundation for the firm’s IT infrastructure. Other infrastructure components—software, data, and
networks—require computer hardware for their storage or operation.
computer
Physical device that takes data as an
input, transforms the data by executing stored instructions, and outputs
information to a number of devices.
The Computer System
A computer is a physical device that takes data as input, transforms these data according to
stored instructions, and outputs processed information.

FIGURE 6-1
Hardware components of a computer system
A contemporary computer system can be categorized into six major components. The central processing unit manipulates data and controls the other parts of the computer system; primary storage temporarily stores data and program instructions during processing; secondary storage (magnetic disk, optical disk, magnetic tape) stores data and instructions when they are not used in processing; input devices (keyboard, computer mouse, touch screen, source data automation) convert data and instructions for processing in the computer; output devices (printers, video display terminals, plotters, audio output) present data in a form that people can understand; and communications devices control the passing of information to and from communications networks. Buses connect these components.

A contemporary computer system consists of a central processing unit, primary storage, secondary storage, input devices, output devices, and communications devices (see Figure 6-1). The central processing unit manipulates raw data into a more useful form and controls the other parts of the computer system.
Primary storage temporarily stores data and program instructions during processing, whereas
secondary storage devices (magnetic and optical disks, magnetic tape) store data and programs when they are not being used in processing. Input devices, such as a keyboard or
mouse, convert data and instructions into electronic form for input into the computer.
Output devices, such as printers and video display terminals, convert electronic data produced by the computer system and display them in a form that people can understand.
Communications devices provide connections between the computer and communications
networks. Buses are circuitry paths for transmitting data and signals among the parts of the
computer system.
In order for information to flow through a computer system and be in a form suitable for
processing, all symbols, pictures, or words must be reduced to a string of binary digits. A
binary digit is called a bit and represents either a 0 or a 1. In the computer, the presence of
an electronic or magnetic signal means one, and its absence signifies zero. Digital computers operate directly with binary digits, either singly or strung together to form bytes. A string
of eight bits that the computer stores as a unit is called a byte. Each byte can be used to store
a decimal number, a symbol, a character, or part of a picture (see Figure 6-2).
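To make the relationship between characters, bytes, and bits concrete, the following short Python sketch (an illustration added to this discussion, not part of any particular system) shows that the ASCII code for the capital letter A is the decimal number 65, stored as the eight-bit byte 01000001.

# A minimal sketch of bits and bytes, assuming a standard Python 3 interpreter.
letter = "A"
code_point = ord(letter)               # the ASCII code for "A" is 65
one_byte = format(code_point, "08b")   # the same value written as eight bits

print(code_point)     # 65
print(one_byte)       # 01000001 -- the byte shown in Figure 6-2
print(len(one_byte))  # 8, because one byte is a string of eight bits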
The CPU and Primary Storage
The central processing unit (CPU) is the part of the computer system where the manipulation of symbols, numbers, and letters occurs; it controls the other parts of the computer system
(see Figure 6-3). Located near the CPU is primary storage (sometimes called primary memory or main memory), where data and program instructions are stored temporarily during processing. Buses provide pathways for transmitting data and signals between the CPU, primary
storage, and the other devices in the computer system. The characteristics of the CPU and
primary storage are very important in determining a computer’s speed and capabilities.
Figure 6-3 also shows that the CPU consists of an arithmetic-logic unit and a control unit.
The arithmetic-logic unit (ALU) performs the computer’s principal logical and arithmetic
operations. It adds, subtracts, multiplies, and divides, determining whether a number is positive, negative, or zero. In addition to performing arithmetic functions, an ALU must be able
to determine when one quantity is greater than or less than another and when two quantities
are equal. The ALU can perform logic operations on letters as well as numbers.
bit
A binary digit representing the smallest unit of data in a computer system.
It can only have one of two states,
representing 0 or 1.
byte
A string of bits, usually eight, used to
store one number or character in a
computer system.
central processing unit (CPU)
Area of the computer system that
manipulates symbols, numbers, and
letters, and controls the other parts of
the computer system.
primary storage
Part of the computer that temporarily
stores program instructions and data
being used by the instructions.
arithmetic-logic unit (ALU)
Component of the CPU that performs the computer’s principal logic
and arithmetic operations.
FIGURE 6-2
Bits and bytes
Bits are represented by either a 0 or 1. A string of eight bits constitutes a byte, which represents a character or number. Illustrated here is a byte representing the letter “A” using the ASCII binary coding standard: the eight bits 0 1 0 0 0 0 0 1 form one byte for the character A.
control unit
Component of the CPU that controls
and coordinates the other parts of the
computer system.
machine cycle
Series of operations required to
process a single machine instruction.
RAM (random access memory)
Primary storage of data or program
instructions that can directly access
any randomly chosen location in the
same amount of time.
The control unit coordinates and controls the other parts of the computer system. It reads
a stored program, one instruction at a time, and directs other components of the computer
system to perform the program’s required tasks. The series of operations required to process a
single machine instruction is called the machine cycle.
Primary storage has three functions. It stores all or part of the software program that is being
executed. Primary storage also stores the operating system programs that manage the operation of the computer (see Section 6.3). Finally, the primary storage area holds data that the
program is using. Internal primary storage is often called RAM, or random access memory. It
is called RAM because it can directly access any randomly chosen location in the same
amount of time.
Primary memory is divided into storage locations called bytes. Each location contains a
set of eight binary switches or devices, each of which can store one bit of information. The
set of eight bits found in each storage location is sufficient to store one letter, one digit, or one
special symbol (such as a $). Each byte has a unique address, similar to a mailbox, indicating
where it is located in RAM. The computer can remember where the data in all of the bytes
are located simply by keeping track of these addresses. Computer storage capacity is measured in bytes. Table 6-1 lists the primary measures of computer storage capacity and processing speed.
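As a worked example of the storage measures in Table 6-1, the short calculation below (a sketch added for illustration, using the decimal definitions given in the table) converts between bytes, kilobytes, megabytes, and gigabytes.

# Storage capacity arithmetic using the decimal units listed in Table 6-1.
KILOBYTE = 1_000
MEGABYTE = 1_000_000
GIGABYTE = 1_000_000_000

# One byte stores one character, so a 2,000-character memo needs about
# 2,000 bytes, or 2 kilobytes.
memo_bytes = 2_000
print(memo_bytes / KILOBYTE)    # 2.0 kilobytes

# A 660-megabyte CD-ROM expressed in bytes and in gigabytes:
cdrom_bytes = 660 * MEGABYTE
print(cdrom_bytes)              # 660000000 bytes
print(cdrom_bytes / GIGABYTE)   # 0.66 gigabytes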
Primary storage is composed of semiconductors, which are integrated circuits made by
printing thousands and even millions of tiny transistors on small silicon chips. There are several different kinds of semiconductor memory used in primary storage. RAM is used for
short-term storage of data or program instructions. RAM is volatile: Its contents will be lost
when the computer’s electric supply is disrupted by a power outage or when the computer is turned off.
FIGURE 6-3
The CPU and primary storage
The CPU contains an arithmetic-logic unit and a control unit. Data and instructions are stored in unique addresses in primary storage that the CPU can access during processing. The data bus, address bus, and control bus transmit signals between the central processing unit, primary storage, and other devices in the computer system.
TABLE 6-1
Key Measures of Computer Storage Capacity and Processing Speed

STORAGE CAPACITY
Byte: String of eight bits
Kilobyte: 1,000 bytes (actually 1,024 storage positions)
Megabyte: 1,000,000 bytes
Gigabyte: 1,000,000,000 bytes
Terabyte: 1,000,000,000,000 bytes

PROCESSING SPEED
Microsecond: 1/1,000,000 second
Nanosecond: 1/1,000,000,000 second
Picosecond: 1/1,000,000,000,000 second
MIPS: Millions of instructions per second
ROM, or read-only memory, can only be read from; it cannot be written to.
ROM chips come from the manufacturer with programs already burned in, or stored. ROM
is used in general-purpose computers to store important or frequently used programs.
ROM (read-only memory)
Semiconductor memory chips that
contain program instructions. These
chips can only be read from; they
cannot be written to.
Computer Processing
The processing capability of the CPU plays a large role in determining the amount of work
that a computer system can accomplish.
Microprocessors and Processing Power
Contemporary CPUs use semiconductor chips called microprocessors, which integrate all
of the memory, logic, and control circuits for an entire CPU onto a single chip. The speed
and performance of a computer’s microprocessors help determine a computer’s processing
power and are based on the number of bits that can be processed at one time (word length);
the amount of data that can be moved between the CPU, primary storage, and other devices
(data bus width); and cycle speed, measured in megahertz. (Megahertz is abbreviated MHz
and stands for millions of cycles per second.)
Microprocessors can be made faster by using reduced instruction set computing (RISC)
in their design. Conventional chips, based on complex instruction set computing, have several hundred or more instructions hardwired into their circuitry, and they may take several
clock cycles to execute a single instruction. If the little-used instructions are eliminated, the
remaining instructions can execute much faster. RISC computers have only the most frequently used instructions embedded in them. A RISC CPU can execute most instructions in
a single machine cycle and sometimes multiple instructions at the same time. RISC is often
used in scientific and workstation computing.
microprocessor
Very large-scale integrated circuit
technology that integrates the computer’s memory, logic, and control on
a single chip.
megahertz
A measure of cycle speed, or the pacing of events in a computer; one
megahertz equals one million cycles
per second.
reduced instruction set computing
(RISC)
Technology used to enhance the
speed of microprocessors by embedding only the most frequently used
instructions on a chip.
Parallel Processing
Processing can also be sped up by linking several processors to work simultaneously on the
same task. Figure 6-4 compares parallel processing to serial processing used in conventional
computers. In parallel processing, multiple processing units (CPUs) break down a problem
into smaller parts and work on it simultaneously. Getting a group of processors to attack the
same problem at once requires both rethinking the problems and special software that can
divide problems among different processors in the most efficient way possible, providing the
needed data, and reassembling the many subtasks to reach an appropriate solution.
Massively parallel computers have huge networks of processor chips interwoven in complex and flexible ways to attack large computing problems. As opposed to parallel processing,
where small numbers of powerful but expensive specialized chips are linked together, massively parallel machines link hundreds or even thousands of inexpensive, commonly used
chips to break problems into many small pieces and solve them.
parallel processing
Type of processing in which more
than one instruction can be processed
at a time by breaking down a problem into smaller parts and processing
them simultaneously with multiple
processors.
massively parallel computers
Computers that use hundreds or
thousands of processing chips to
attack large computing problems
simultaneously.
FIGURE 6-4
Sequential and parallel processing
During sequential processing, each task is assigned to one CPU that processes one instruction at a time. In parallel processing, multiple tasks are assigned to multiple processing units to expedite the result.
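The following Python sketch (an illustrative example added here, not a production parallel-processing system) shows the idea behind parallel processing: a large problem, summing one million numbers, is broken into smaller parts, several processors work on the parts simultaneously, and the subresults are reassembled into the final answer.

# Sketch of parallel processing using Python's standard multiprocessing module.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each processing unit works on its own piece of the problem.
    return sum(chunk)

if __name__ == "__main__":
    numbers = list(range(1, 1_000_001))        # the large problem
    chunk_size = 250_000
    chunks = [numbers[i:i + chunk_size]        # break it into smaller parts
              for i in range(0, len(numbers), chunk_size)]

    with Pool(processes=4) as pool:            # four CPUs work simultaneously
        subtotals = pool.map(partial_sum, chunks)

    print(sum(subtotals))                      # reassembled result: 500000500000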
Storage, Input, and Output Technology
The capabilities of computer systems depend not only on the speed and capacity of the CPU
but also on the speed, capacity, and design of storage, input, and output technology. Storage,
input, and output devices are called peripheral devices because they are outside the main
computer system unit.
Secondary Storage Technology
secondary storage
Relatively long-term, nonvolatile storage of data outside the CPU and primary storage.
magnetic disk
A secondary storage medium in which
data are stored by means of magnetized spots on a hard or floppy disk.
hard disk
Magnetic disk resembling a thin
metallic platter; used in large computer systems and in most PCs.
floppy disk
Removable magnetic disk storage primarily used with PCs.
RAID (Redundant Array of
Inexpensive Disks)
Disk storage technology to boost disk
performance by packaging more than
100 smaller disk drives with a controller chip and specialized software
in a single large unit to deliver data
over multiple paths simultaneously.
Electronic commerce and electronic business have made storage a strategic technology.
Although electronic commerce and electronic business are reducing manual processes, data
of all types must be stored electronically and available whenever needed. Most of the information used by a computer application is stored on secondary storage devices located outside
of the primary storage area. Secondary storage is used for relatively long-term storage of data
outside the CPU. Secondary storage is nonvolatile and retains data even when the computer
is turned off. The most important secondary storage technologies are magnetic disk, optical
disk, and magnetic tape.
Magnetic Disk The most widely used secondary storage medium today is magnetic
disk. There are two kinds of magnetic disks: floppy disks (used in PCs) and hard disks
(used on large commercial disk drives and PCs). Large mainframe or midrange computer
systems have multiple hard disk drives because they require immense disk storage capacity in the gigabyte and terabyte range. PCs also use floppy disks, which are removable
and portable, with lower storage capacities and access rates than hard disks. Removable
disk drives are becoming popular backup storage alternatives for PC systems. Magnetic
disks on both large and small computers permit direct access to individual records so that
data stored on the disk can be directly accessed regardless of the order in which the data
were originally recorded. Disk technology is useful for systems requiring rapid and direct
access to data.
Disk drive performance can be further enhanced by using a disk technology called RAID
(Redundant Array of Inexpensive Disks). RAID devices package more than a hundred disk
drives, a controller chip, and specialized software into a single large unit. Traditional disk
drives deliver data from the disk drive along a single path, but RAID delivers data over multiple paths simultaneously, improving disk access time and reliability. For most RAID systems, data on a failed disk can be restored automatically without the computer system having
to be shut down.
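The redundancy that lets a RAID unit rebuild data from a failed disk can be illustrated with exclusive-or (XOR) parity, the technique used in several common RAID levels (real controllers implement this in hardware and firmware, and details vary by level; the Python lines below are only a simplified sketch).

# Simplified sketch of XOR parity, the redundancy idea behind several RAID levels.
drive1 = 0b10110100          # one byte of data stored on drive 1
drive2 = 0b01101001          # one byte of data stored on drive 2
parity = drive1 ^ drive2     # parity byte stored on a third drive

# If drive 1 fails, its byte can be rebuilt from the surviving drive and the parity.
rebuilt = parity ^ drive2
print(format(rebuilt, "08b"))   # 10110100, identical to drive 1's byte
print(rebuilt == drive1)        # True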
Optical Disks Optical disks, also called compact disks or laser optical disks, use laser technology to store massive quantities of data in a highly compact form. They are available for
both PCs and large computers. The most common optical disk system used with PCs is called
CD-ROM (compact disk read-only memory). A 4.75-inch compact disk for PCs can store up to 660 megabytes, nearly 300 times more than a high-density floppy disk. Optical disks are most appropriate for applications where enormous quantities of unchanging data must be stored compactly for easy retrieval or for applications combining text, sound, and images.

(Photo: Secondary storage devices such as floppy disks, optical disks, and hard disks are used to store large quantities of data outside the CPU and primary storage. They provide direct access to data for easy retrieval.)
CD-ROM is read-only storage. No new data can be written to it; it can only be read.
WORM (write once/read many) and CD-R (compact disk-recordable) optical disk systems
allow users to record data only once on an optical disk. Once written, the data cannot be
erased but can be read indefinitely. CD-RW (CD-ReWritable) technology has been developed to allow users to create rewritable optical disks for applications requiring large volumes
of storage where the information is only occasionally updated.
Digital video disks (DVDs), also called digital versatile disks, are optical disks the same
size as CD-ROMs but of even higher capacity. They can hold a minimum of 4.7 gigabytes of
data, enough to store a full-length, high-quality motion picture. DVDs are being used to
store movies and multimedia applications using large amounts of video and graphics, and
they may eventually replace CD-ROMs because they can store large amounts of digitized text, graphics,
audio, and video data. Initially, DVDs were read-only; writable and rewritable DVD drives
and media are now available.
CD-ROM (compact disk read-only
memory)
Read-only optical disk storage used for
imaging, reference, and database
applications with massive amounts of
unchanging data and for multimedia.
Magnetic Tape Magnetic tape is an older storage technology that still is employed for secondary storage of large quantities of data that are needed rapidly but not instantly. Magnetic tape is inexpensive and relatively stable. However, it stores data sequentially and is relatively slow compared to the speed of other secondary storage media. In order to find an individual record stored on magnetic tape, such as an employment record, the tape must be read from the beginning up to the location of the desired record.

magnetic tape
Inexpensive, older secondary-storage medium in which large volumes of information are stored sequentially by means of magnetized and nonmagnetized spots on tape.

CD-RW (CD-ReWritable)
Optical disk storage that can be rewritten many times by users.

digital video disk (DVD)
High-capacity optical storage medium that can store full-length videos and large amounts of data.
Storage Networking To meet the escalating demand for data-intensive graphics, Web
transactions, and other digital firm applications, the amount of data that companies need to
store is doubling every 12 to 18 months. Companies are turning to new kinds of storage infrastructures to deal with the complexity and cost of mushrooming storage requirements.
Large companies have many different storage resources—disk drives, tape backup drives,
RAID, and other devices that may be scattered in many different locations. This arrangement is expensive to manage and makes it difficult to access data across the enterprise.
Storage networking technology enables firms to manage all of their storage resources centrally by providing an overall storage plan for all the storage devices in the enterprise.
There are alternative storage networking arrangements. In direct-attached storage, storage
devices are connected directly to individual servers and must be accessed through each
server, which can create bottlenecks. Network-attached storage (NAS) overcomes this problem by attaching high-speed RAID storage devices to a network so that the devices in the network can access this storage through a specialized server dedicated to file service and storage.
Storage-area networks (SANs) go one step further by placing multiple storage devices on a
separate high-speed network dedicated to storage purposes. A storage area network (SAN) is
a specialized high-speed network dedicated to storage that connects different kinds of storage
devices, such as tape libraries and disk arrays. The SAN storage devices are located on their
own network and connected using a high-speed transmission technology such as Fibre Channel. The network moves data among pools of servers and storage devices, creating an enterprisewide infrastructure for data storage. The SAN creates a large central pool of storage that can be shared by multiple servers so that users can rapidly share data across the SAN. Figure 6-5 illustrates how a SAN works.

network-attached storage (NAS)
Attaching high-speed RAID storage devices to a network so that the devices in the network can access these storage devices through a specialized server dedicated to file service and storage.

storage area network (SAN)
A high-speed network dedicated to storage that connects different kinds of storage devices, such as tape libraries and disk arrays, so they can be shared by multiple servers.

FIGURE 6-5
A storage area network (SAN)
A typical SAN consists of a server, storage devices (such as RAID arrays and a tape library), and networking devices and is used strictly for storage. The SAN stores data on many different types of storage devices, providing data to the enterprise. The SAN supports communication between any server and the storage unit as well as between different storage devices in the network.
SANs can be expensive and difficult to manage, but they are very useful for companies
that need to share information across applications and computing platforms. SANs can help
these companies consolidate their storage resources and provide rapid data access for widely
distributed users.
Input and Output Devices
radio-frequency identification
(RFID)
Technology using tiny tags with
embedded microchips containing
data about an item and its location
to transmit short-distance radio signals to special RFID readers that
then pass the data on to a computer
for processing.
Human beings interact with computer systems largely through input and output devices.
Input devices gather data and convert them into electronic form for use by the computer,
whereas output devices display data after they have been processed. Table 6-2 describes the
principal input and output devices.
The principal input devices consist of keyboards, pointing devices (such as the computer
mouse and touch screens), and source data automation technologies (optical and magnetic
ink character recognition, pen-based input, digital scanners, audio input, and sensors),
which capture data in computer-readable form at the time and place they are created. They
also include radio-frequency identification (RFID) devices that use tiny tags with embedded microchips containing data about an item and its location to transmit radio signals over
a short distance to special RFID readers. The RFID readers then pass the data over a network to a computer for processing. RFID is especially useful for tracking the locations of
items as they move through the supply chain. The principal output devices are cathode ray
tube terminals (CRTs), sometimes called video display terminals (VDTs), printers, and
audio output.
TABLE 6-2
Input and Output Devices

INPUT DEVICES
Keyboard: Principal method of data entry for text and numerical data.
Computer mouse: Handheld device with point-and-click capabilities that is usually connected to the computer by a cable. The computer user can move the mouse around on a desktop to control the cursor’s position on a computer display screen, pushing a button to select a command. Trackballs and touch pads often are used in place of the mouse as pointing devices on laptop PCs.
Touch screen: Device that allows users to enter limited amounts of data by touching the surface of a sensitized video display monitor with a finger or a pointer. Often found in information kiosks in retail stores, restaurants, and shopping malls.
Optical character recognition: Device that can translate specially designed marks, characters, and codes into digital form. The most widely used optical code is the bar code, which is used in point-of-sale systems in supermarkets and retail stores. The codes can include time, date, and location data in addition to identification data.
Magnetic ink character recognition (MICR): Technology used primarily in check processing for the banking industry. Characters on the bottom of a check identify the bank, checking account, and check number and are preprinted using special magnetic ink. A MICR reader translates these characters into digital form for the computer.
Pen-based input: Handwriting-recognition devices, such as pen-based tablets, notebooks, and notepads, that convert the motion made by an electronic stylus pressing on a touch-sensitive tablet screen into digital form.
Digital scanner: Device that translates images, such as pictures or documents, into digital form; essential component of image-processing systems.
Audio input: Voice input devices that convert spoken words into digital form for processing by the computer. Microphones and tape cassette players can serve as input devices for music and other sounds.
Sensors: Devices that collect data directly from the environment for input into a computer system. For instance, today’s farmers can use sensors to monitor the moisture of the soil in their fields to help them with irrigation.
Radio-frequency identification (RFID): Devices that use tags with microchips to transmit information about items and their locations to special RFID readers. Useful for tracking items as they move through the supply chain.

OUTPUT DEVICES
Cathode ray tube (CRT): Electronic gun that shoots a beam of electrons illuminating tiny points on a display screen. Laptop computers use flat panel displays, which are less bulky than CRT monitors.
Printers: Devices that produce a printed hard copy of information output. They include impact printers (such as dot matrix printers) and nonimpact printers (such as laser, inkjet, and thermal transfer printers).
Audio output: Voice output devices that convert digital output data back into intelligible speech. Other audio output, such as music, can be delivered by speakers connected to the computer.
Batch and Online Input and Processing
The manner in which data are input into the computer affects how the data can be
processed. Information systems collect and process information in one of two ways: through
batch or through online processing. In batch processing, transactions, such as orders or payroll time cards, are accumulated and stored in a group or batch until the time when, because
of some reporting cycle, it is efficient or necessary to process them. Batch processing is found
primarily in older systems where users need only occasional reports. In online processing,
the user enters transactions into a device (such as a data entry keyboard or bar code reader)
that is directly connected to the computer system. The transactions usually are processed
immediately. Most processing today is online processing. Batch systems often use tape as a
storage medium, whereas online processing systems use disk storage, which permits immediate access to specific items.
batch processing
A method of collecting and processing
data in which transactions are accumulated and stored until a specified
time when it is convenient or necessary to process them as a group.
online processing
A method of collecting and processing
data in which transactions are
entered directly into the computer system and processed immediately.
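The difference between the two approaches can be sketched in a few lines of Python (a hypothetical illustration, not drawn from any real transaction system): batch processing accumulates transactions and posts them as a group at a scheduled time, whereas online processing posts each transaction as soon as it is entered.

# Illustrative sketch of batch versus online transaction processing.
def post_to_ledger(transaction):
    print("posted:", transaction)

# Batch processing: transactions are accumulated and stored in a group.
batch = []

def record_batch(transaction):
    batch.append(transaction)        # held until the reporting cycle

def run_nightly_batch():
    for transaction in batch:        # processed together at the scheduled time
        post_to_ledger(transaction)
    batch.clear()

# Online processing: each transaction is processed immediately on entry.
def record_online(transaction):
    post_to_ledger(transaction)

record_batch({"order": 101, "amount": 250.00})
record_batch({"order": 102, "amount": 75.50})
run_nightly_batch()                                 # both orders posted now
record_online({"order": 103, "amount": 18.99})      # posted immediately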
Interactive Multimedia
multimedia
The integration of two or more types
of media, such as text, graphics,
sound, voice, full-motion video, or
animation, into a computer-based
application.
streaming technology
Technology for transferring data so
that they can be processed as a steady
and continuous stream.
MP3 (MPEG3)
Compression standard that can compress audio files for transfer over the
Internet with virtually no loss in
quality.
The processing, input, output, and storage technologies we have just described can be used
to create multimedia applications that integrate sound and full-motion video, or animation
with graphics and text into a computer-based application. Multimedia is becoming the foundation of new consumer products and services, such as electronic books and newspapers,
electronic classroom-presentation technologies, full-motion videoconferencing, imaging,
graphics design tools, and video and voice mail. PCs today come with built-in multimedia
capabilities, including high-resolution color monitors, CD-ROM drives, or DVD drives to
store video, audio, and graphic data, and stereo speakers for amplifying audio output.
Interactive Web pages replete with graphics, sound, animation, and full-motion video
have made multimedia popular on the Internet. For example, visitors to the CNN.com Web
site can access news stories from CNN, photos, on-air transcripts, video clips, and audio
clips. The video and audio clips are made available using streaming technology, which
allows audio and video data to be processed as a steady and continuous stream as they are
downloaded from the Web.
Multimedia Web sites are also being used to sell digital products, such as digitized music
clips. A compression standard known as MP3, also called MPEG3, which stands for Motion
Picture Experts Group, audio layer 3, can compress audio files down to one-tenth or one-twelfth
of their original size with virtually no loss in quality. Visitors to Web sites such as MP3.com can
download MP3 music clips over the Internet and play them on their own computers.
(Photo: World-traveled singer-songwriter-producer David Goldman’s original songs encompass various musical styles including Blues, Acoustic Rock, Pop/Jazz, Latin, and Asian. One can preview tracks from his albums at the multimedia Web sites www.DavidGoldman.com or www.cdbaby/goldman.com.)

6.2 CATEGORIES OF COMPUTERS AND COMPUTER SYSTEMS

mainframe
Largest category of computer, used for major business processing.

midrange computer
Middle-size computer that is capable of supporting the computing needs of smaller organizations or of managing networks of other computers.
Contemporary computers can be categorized as mainframes, midrange computers, PCs,
workstations, and supercomputers. Managers need to understand the capabilities of each of
these types of computers, and why some types are more appropriate for certain processing
work than others.
Classifying Computers
A mainframe is the largest computer, a powerhouse with massive memory and extremely
rapid processing power. It is used for very large business, scientific, or military applications where a computer must handle massive amounts of data or many complicated
processes. A midrange computer is less powerful, less expensive, and smaller than a
mainframe but capable of supporting the computing needs of smaller organizations or of
managing networks of other computers. Midrange computers can be minicomputers,
which are used in systems for universities, factories, or research laboratories, or they can
be servers, which are used for managing internal company networks or Web sites. Server
computers are specifically optimized to support a computer network, enabling users to
share files, software, peripheral devices (such as printers), or other network resources.
Servers have large memory and disk-storage capacity, high-speed communications capabilities, and powerful CPUs.
Servers have become important components of firms’ IT infrastructures because they
provide the hardware platform for electronic commerce. By adding special software, they
can be customized to deliver Web pages, process purchase and sale transactions, or
exchange data with systems inside the company. Organizations with heavy electronic commerce requirements and massive Web sites are running their Web and electronic commerce
applications on multiple servers in server farms in computing centers run by commercial
vendors such as IBM.
A personal computer (PC), which is sometimes referred to as a microcomputer, is one
that can be placed on a desktop or carried from room to room. Smaller laptop PCs are often
used as portable desktops on the road. PCs are used as personal machines as well as in business. A workstation also fits on a desktop but has more powerful mathematical and graphicsprocessing capabilities than a PC and can perform more complicated tasks than a PC in the
same amount of time. Workstations are used for scientific, engineering, and design work that
requires powerful graphics or computational capabilities.
A supercomputer is a highly sophisticated and powerful computer that is used for tasks
requiring extremely rapid and complex calculations with hundreds of thousands of variable
factors. Supercomputers use parallel processors and traditionally have been used in scientific
and military work, such as classified weapons research and weather forecasting, which use
complex mathematical models. They are now starting to be used in business for the manipulation of vast quantities of data.
Computer Networks and Client/Server
Computing
Today, stand-alone computers have been replaced by computers in networks for most processing tasks. The use of multiple computers linked by a communications network for processing is called distributed processing. In contrast with centralized processing, in which
all processing is accomplished by one large central computer, distributed processing distributes the processing work among PCs, midrange computers, and mainframes linked
together.
One widely used form of distributed processing is client/server computing. Client/server
computing splits processing between “clients” and “servers.” Both are on the network, but
each machine is assigned functions it is best suited to perform. The client is the user point of
entry for the required function and is normally a desktop computer, workstation, or laptop
computer. The user generally interacts directly only with the client portion of the application,
often to input data or retrieve data for further analysis. The server provides the client with services. The server could be a mainframe or another desktop computer, but specialized server
computers are often used in this role. Servers store and process shared data and also perform
back-end functions not visible to users, such as managing network activities. Figure 6-6 illustrates the client/server computing concept. Computing on the Internet uses the client/server
model (see Chapter 9).
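A minimal client/server exchange can be sketched with Python’s standard socket library (the address, port, and request format below are hypothetical, chosen only for illustration): the client sends a request over the network, and the server performs the back-end work and returns the data.

# Minimal client/server sketch: the client sends a request, the server
# does the back-end work and returns data over the network.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050      # hypothetical address and port
ready = threading.Event()

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        ready.set()                               # server is ready for clients
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()    # e.g., "PRICE sku-42"
            conn.sendall(f"Result for {request}: 19.95".encode())

threading.Thread(target=server, daemon=True).start()
ready.wait()

# The client is the user's point of entry: it sends the request and
# displays the data the server returns.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall(b"PRICE sku-42")
    print(client.recv(1024).decode())             # Result for PRICE sku-42: 19.95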
Figure 6-7 illustrates five different ways that the components of an application could be
partitioned between the client and the server. The interface component is essentially the
application interface—how the application appears visually to the user. The application
logic component consists of the processing logic, which is shaped by the organization’s business rules. (An example might be that a salaried employee is only to be paid monthly.) The
data management component consists of the storage and management of the data used by the application.
minicomputer
Middle-range computer used in systems for universities, factories, or
research laboratories.
server
Computer specifically optimized to
provide software and other resources
to other computers over a network.
Internet Connection
The Internet Connection for this
chapter will direct you to a series
of Web sites where you can complete an exercise to survey the
products and services of major
computer hardware vendors and
the use of Web sites in the computer hardware industry.
server farm
Large group of servers maintained by
a commercial vendor and made available to subscribers for electronic commerce and other activities requiring
heavy use of servers.
personal computer (PC)
Small desktop or portable computer.
workstation
Desktop computer with powerful
graphics and mathematical capabilities and the ability to perform several
complicated tasks at once.
supercomputer
Highly sophisticated and powerful
computer that can perform very complex computations extremely rapidly.
distributed processing
The distribution of computer processing work among multiple computers
linked by a communications network.
centralized processing
Processing that is accomplished by
one large central computer.
client/server computing
A model for computing that splits
processing between “clients” and
“servers” on a network, assigning
functions to the machine most able to
perform the function.
client
The user point of entry for the required
function in client/server computing.
Normally a desktop computer, workstation, or laptop computer.
FIGURE 6-6
Client/server computing
In client/server computing, computer processing is split between client machines and server machines linked by a network. Users interface with the client machines, which handle the user interface and application functions and send requests to the server; the server returns data and services and manages data, application functions, and network resources.

downsizing
The process of transferring applications from large computers to smaller ones.
The exact division of tasks depends on the requirements of each application,
including its processing needs, the number of users, and the available resources.
In some firms client/server networks with PCs have actually replaced mainframes and
minicomputers. The process of transferring applications from large computers to smaller
ones is called downsizing. Downsizing can potentially reduce computing costs because
memory and processing power on a PC cost a fraction of their equivalent on a mainframe.
The decision to downsize involves many factors in addition to the cost of computer hardware, including the need for new software, training, and perhaps new organizational procedures (see the discussion of total cost of ownership in Section 6.4).
Network Computers and Peer-to-Peer
Computing
network computer (NC)
Simplified desktop computer that
does not store software programs or
data permanently. Users download
whatever software or data they need
from a central computer over the
Internet or an organization’s own
internal network.
In one form of client/server computing, client processing and storage capabilities are so minimal that the bulk of computer processing occurs on the server. The term thin client is sometimes used to refer to the client in this arrangement. Thin clients with minimal memory,
storage, and processor power that are designed to work on networks are called network computers (NCs). NC users download whatever software or data they need from a central computer over the Internet or an organization’s internal network. The central computer also
saves information for the user and makes it available for later retrieval, effectively eliminating the need for secondary storage devices such as hard disks, floppy disks, CD-ROMs, and
their drives.
NC systems are less expensive and less complicated to maintain than PCs with local processing and storage, and NCs can be administered and updated from a central network
server. Software programs and applications would not have to be purchased, installed, and
upgraded for each user because software would be delivered and maintained from one central point.
FIGURE 6-7
Types of client/server computing
There are various ways in which an application’s interface, logic, and data management components can be divided among the clients and servers in a network.
Network computers and centralized software distribution thus could increase management’s control over the organization’s computing function. However, if a network failure
occurs, hundreds or thousands of employees would not be able to use their computers,
whereas people could keep working if they had full-function PCs. Companies should closely
examine how network computers might fit into their information technology infrastructure.
Peer-to-Peer Computing
Another form of distributed processing, called peer-to-peer computing, puts processing
power back on users’ desktops, linking these computers so that they can share processing
tasks. Individual PCs, workstations, or other computers can share data, disk space, or processing power for a variety of tasks when they are linked in a network, including the Internet.
The peer-to-peer computing model stands in contrast to the network computing model
because processing power resides on individual desktops and these computers work together
without a server or any central controlling authority.
One form of peer-to-peer computing called grid computing uses special software to
reclaim unused computing cycles on desktop computers in a network and harness them into
a “virtual supercomputer” for solving large computational problems. Grid computing breaks
down a large problem into small pieces that can run on many separate machines in a network that are organized into a computational grid. Computing resources can be shared over
the grid with anyone authorized to use them, regardless of their location. When a firm needs
additional processing capacity, that company could distribute its workload across a grid
instead of having to purchase additional hardware. For example, Bank One distributed processing for massive risk-analytics used in its interest-rate derivatives trading business across a
grid of 100 to 150 servers (Schmerken, 2003).
Another peer-to-peer computing application that has gained some notoriety is the use of
file-sharing systems, such as Kazaa for swapping music and other files. Chapter 5 and the case
study concluding Chapter 4 discuss the ethical and business implications of these systems.
Each form of computer processing can provide benefits, depending on the business needs
of the organization. Network computing might be appropriate for firms with highly centralized information technology infrastructures. Peer-to-peer computing could be very useful in
workgroup collaboration for accelerating communication processes and reducing collaboration costs, but it can create self-organizing networks with unreliable and untrusted components that could be difficult for organizations to control (Kubiatowicz, 2003; Schoder and
Fischbach, 2003).
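The grid idea of breaking a large problem into small work units that available machines pick up can be sketched as follows (an illustration only; here the “nodes” are worker processes on one computer, whereas a real grid would dispatch the same work units to idle desktop machines across a network).

# Sketch of grid-style work distribution: a large problem (counting primes
# below 100,000) is split into work units handed out to available workers.
from concurrent.futures import ProcessPoolExecutor

def work_unit(start, stop):
    # One small piece of the overall computation.
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(start, stop) if is_prime(n))

if __name__ == "__main__":
    with ProcessPoolExecutor() as workers:       # stand-in for the grid's nodes
        futures = [workers.submit(work_unit, start, start + 10_000)
                   for start in range(0, 100_000, 10_000)]
        print(sum(f.result() for f in futures))  # total primes found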
peer-to-peer computing
Form of distributed processing that links computers via the Internet or private networks so that they can share data or processing tasks.

grid computing
Applying the resources of many computers in a network to a single problem.

6.3 TYPES OF SOFTWARE
To play a useful role in the firm’s information technology infrastructure, computer hardware
requires computer software. Chapter 1 defined computer software as the detailed instructions that control the operation of a computer system. Selecting appropriate software for the
organization is a key management decision.
A software program is a series of statements or instructions to the computer. The process
of writing or coding programs is termed programming, and individuals who specialize in this
task are called programmers.
There are two major types of software: system software and application software. Each
kind performs a different function. System software is a set of generalized programs that
manage the computer’s resources, such as the central processor, communications links, and
peripheral devices. Programmers who write system software are called system programmers.
Application software describes the programs that are written for or by users to apply the
computer to a specific task. Software for processing an order or generating a mailing list is
application software. Programmers who write application software are called application
programmers.
The types of software are interrelated and can be thought of as a set of nested boxes, each
of which must interact closely with the other boxes surrounding it. Figure 6-8 illustrates this relationship.
program
A series of statements or instructions
to the computer.
system software
Generalized programs that manage
the computer’s resources, such as the
central processor, communications
links, and peripheral devices.
application software
Programs written for a specific application to perform functions specified
by end users.
FIGURE 6-8
The major types of software
The relationship between the system software, application software, and users can be illustrated by a series of nested boxes. System software, consisting of operating systems (which schedule computer events, allocate computer resources, and monitor events), language translators (interpreters and compilers), and utility programs (for routine operations such as sorting, listing, and printing, and for managing data, such as creating and merging files), controls access to the hardware. Application software, such as the programming languages (assembly language, FORTRAN, COBOL, BASIC, PASCAL, C) and the fourth-generation languages and PC software tools, must work through the system software to operate. The user interacts primarily with the application software.
The system software surrounds and controls access to the hardware. Application
software must work through the system software in order to operate. End users work primarily
with application software. Each type of software must be specially designed for a specific machine to ensure its compatibility.
System Software and PC Operating Systems
operating system
The system software that manages
and controls the activities of the computer.
source code
Program instructions written in a
high-level language that must be
translated into machine language to
be executed by the computer.
compiler
Special system software that translates a high-level language into
machine language for execution by
the computer.
System software coordinates the various parts of the computer system and mediates between
application software and computer hardware. The system software that manages and controls
the computer’s activities is called the operating system. Other system software consists of
computer language translation programs that convert programming languages into machine
language that can be understood by the computer and utility programs that perform common processing tasks.
The operating system is the computer system’s chief manager. The operating system allocates and assigns system resources, schedules the use of computer resources and computer
jobs, and monitors computer system activities. The operating system provides locations in
primary memory for data and programs, and controls the input and output devices, such as
printers, terminals, and telecommunication links. The operating system also coordinates the
scheduling of work in various areas of the computer so that different parts of different jobs
can be worked on at the same time. Finally, the operating system keeps track of each computer job and may also keep track of who is using the system, of what programs have been
run, and of any unauthorized attempts to access the system. Operating system capabilities,
such as multiprogramming, virtual storage, time-sharing, and multiprocessing, enable the
computer to handle many different tasks and users at the same time. Table 6-3 describes
these capabilities.
Language Translation and Utility Software
System software includes special language translator programs that translate high-level language programs written in programming languages, such as COBOL, FORTRAN, or C, into
machine language that the computer can execute. The program in the high-level language
before translation into machine language is called source code. A compiler translates source
code into machine code called object code, which is linked to other object code modules and
then executed by the computer. Some programming languages, such as BASIC, do not use a
TABLE 6-3
Operating System Capabilities

Multiprogramming: Multiple programs can share a computer system's resources at any one time through concurrent use of the CPU. Only one program is actually using the CPU at any given moment, but the input/output needs of other programs can be serviced at the same time.

Virtual storage: Handles programs more efficiently by breaking down the programs into tiny sections that are read into memory only when needed. The rest of each program is stored on disk until it is required. Virtual storage allows very large programs to be executed by small machines, or a large number of programs to be executed concurrently by a single machine.

Time-sharing: Allows many users to share computer processing resources simultaneously by allocating each of thousands of users a tiny slice of computer time to perform computing tasks and transferring processing from user to user. This arrangement permits many users to be connected to a CPU simultaneously, with each receiving only a tiny amount of CPU time.

Multiprocessing: Links together two or more CPUs to work in parallel in a single computer system. The operating system can assign multiple CPUs to execute different instructions from the same program or from different programs simultaneously, dividing the work between the CPUs.
compiler but an interpreter, which translates each source code statement one at a time into
machine code as it executes.
System software includes utility programs for routine, repetitive tasks, such as copying,
clearing primary storage, computing a square root, or sorting. Utility programs can be shared
by all users of a computer system and can be used in many different information system
applications when requested.
PC Operating Systems and Graphical User Interfaces
Like any other software, PC software is based on specific operating systems and computer
hardware. Software written for one PC operating system generally cannot run on another.
Table 6-4 compares the leading PC operating systems: Windows XP, Windows 2000,
Windows Server 2003, Windows 98 and Windows Me, Windows CE, UNIX, Linux, the
Macintosh operating system, and DOS.
When a user interacts with a computer, including a PC, the interaction is controlled by
an operating system. A user communicates with an operating system through the user interface of that operating system. Contemporary PC operating systems use a graphical user
interface, often called a GUI, which makes extensive use of icons, buttons, bars, and boxes
to perform tasks. It has become the dominant model for the user interface of PC operating
systems and for many types of application software.
Microsoft’s Windows family of operating systems provides a streamlined graphical user
interface that arranges icons to provide instant access to common tasks. These operating
systems can perform multiple tasks simultaneously and have powerful networking capabilities,
including the capability to integrate fax, e-mail, and scheduling programs. They include
tools for group collaboration, accessing information from the Internet, and creating and storing Web pages. Windows XP (for eXPerience), the most recent Windows operating system,
is reliable, robust, and relatively easy to use. The Windows XP Home Edition is for home
users and the Windows XP Professional Edition targets mobile and business users. Windows
98 and Windows Me are earlier versions of this operating system for home users.
Windows 2000 is used as an operating system for high-performance desktop and laptop
computers and for network servers. Windows operating systems for network servers provide
network management functions, including tools for creating and operating Web sites and
other Internet services. In addition to Windows 2000, they include Windows Server 2003,
the most recent Windows server product, and Windows NT, which is an earlier version of this
software. There are multiple editions of these server operating systems to meet the needs of
small businesses, medium and large businesses, and businesses that have massive computer
graphical user interface (GUI)
The part of an operating system users
interact with that uses graphic icons
and the computer mouse to issue
commands and make selections.
Windows XP
Powerful Windows operating system
that provides reliability, robustness,
and ease of use for both corporate and
home PC users.
Windows 98
Earlier version of the Windows operating system that is closely integrated
with the Internet.
Windows 2000
Windows operating system for high-performance PCs and network servers.
Supports networking, multitasking,
multiprocessing, and Internet services.
Windows Server 2003
Most recent Windows operating system for servers.
TABLE 6-4
Leading PC Operating Systems

Windows XP: Reliable, robust operating system for powerful PCs with versions for both home and corporate users. Features support for the Internet, multimedia, and group collaboration, along with powerful networking, security, and corporate management capabilities.

Windows 2000: Operating system for PCs, workstations, and network servers. Supports multitasking, multiprocessing, intensive networking, and Internet services for corporate computing.

Windows Server 2003: Most recent Windows operating system for servers.

Windows 98/Me: Earlier versions of the Windows operating system for home users.

Windows CE: Pared-down version of the Windows operating system, including its graphical user interface, designed to run on small handheld computers, personal digital assistants, wireless communication devices, and other information appliances.

UNIX: Used for powerful PCs, workstations, and network servers. Supports multitasking, multiuser processing, and networking. Is portable to different models of computer hardware.

Linux: Free, reliable alternative to UNIX and Windows operating systems that runs on many different types of computer hardware and can be modified by software developers.

Mac OS: Operating system for the Macintosh computer, featuring multitasking, powerful multimedia and networking capabilities, and a mouse-driven graphical user interface. Supports connecting to and publishing on the Internet.

DOS: 16-bit operating system for older PCs based on the IBM PC standard. Does not support multitasking and limits the size of a program in memory to 640K.
UNIX
Operating system for all types of computers, which is machine independent
and supports multiuser processing,
multitasking, and networking. Used
in high-end workstations and servers.
Linux
Reliable and compactly designed
operating system that is an offshoot of
UNIX and that can run on many different hardware platforms and is
available free or at very low cost.
Used as an alternative to UNIX and
Windows NT.
open-source software
Software that provides free access to
its program code, allowing users to
modify the program code to make
improvements.
centers and processing requirements. Windows Server 2003 has new functions to facilitate
wireless connections to corporate networks, tools for Web services, and tighter links to
Microsoft data management and desktop software products.
UNIX is an interactive, multiuser, multitasking operating system developed by Bell
Laboratories in 1969 to help scientific researchers share data. UNIX was designed to connect
various machines together and is highly supportive of communications and networking.
UNIX is often used on workstations and servers, and provides the reliability and scalability
for running large systems on high-end servers. UNIX can run on many different kinds of
computers and can be easily customized. Application programs that run under UNIX can be
ported from one computer to run on a different computer with little modification.
UNIX is considered powerful but very complex, with a legion of commands. Graphical
user interfaces have been developed for UNIX. UNIX also poses some security problems
because multiple jobs and users can access the same file simultaneously. Vendors have developed different versions of UNIX that are incompatible, thereby limiting software portability.
Linux is a UNIX-like operating system that can be downloaded from the Internet free of
charge or purchased for a small fee from companies that provide additional tools for the software. It is free, reliable, compactly designed, and capable of running on many different hardware platforms, including servers, handheld computers, and consumer electronics. Linux
has become popular during the past few years among sophisticated computer users and businesses as a robust low-cost alternative to UNIX and the Windows operating systems. Major
hardware and software vendors are starting to provide versions of their products that can run
on Linux. The software instructions for Linux are available along with the operating system
software, so software developers can modify the software to fit their particular needs. The
Window on Management describes why business use of Linux is growing.
Linux is an example of open-source software, which provides all computer users with free
access to its program code, so they can modify the code to fix errors or to make improvements. Open-source software, such as Linux, is not owned by any company or individual. A
global network of programmers and users manages and modifies the software, usually without being paid to do so.
Microsoft Windows XP is a robust
and easy-to-use operating system
for corporate and home applications.
Programming Languages and Contemporary
Software Tools
Application software is primarily concerned with accomplishing the tasks of end users. Many
different languages and software tools can be used to develop application software. Managers
should understand which software tools and programming languages are appropriate for
their organization’s objectives.
Application Programming Languages for Business
The first generation of computer languages consisted of machine language, which required
the programmer to write all program instructions in the 0s and 1s of binary code and to
specify storage locations for every instruction and item of data used. Programming in
machine language was a very slow, labor-intensive process. As computer hardware
improved and processing speed and memory size increased, programming languages
became progressively easier for humans to understand and use. From the mid-1950s to the
mid-1970s, high-level programming languages emerged, allowing programs to be written
with regular words using sentence-like statements.
Table 6-5 describes the major programming languages used for business and scientific
work. Programming languages for business applications include COBOL, C, C++, and
Visual Basic. COBOL (COmmon Business Oriented Language) was developed in the
early 1960s for processing large data files with alphanumeric characters (mixed alphabetic
and numeric data) and for performing repetitive tasks, such as payroll. It is not well suited for
complex mathematical calculations but has been used for many business processing and
reporting tasks. C is a powerful and efficient language developed in the early 1970s that
combines machine portability with tight control and efficient use of computer resources. C
is used primarily by professional programmers to create operating systems and application
software, especially for PCs, and it can work on a variety of different computers. C++ is a
newer version of C that is object-oriented. (See the discussion on object-oriented programming later in this section.) It has all the capabilities of C plus additional features for working
with software objects. C++ is used for developing application software. Visual Basic is a
widely used visual programming tool and environment for creating applications that run on
Microsoft Windows. A visual programming language allows users to manipulate graphic or
iconic elements to create programs. With Visual Basic, users develop programs by using a
machine language
A programming language consisting
of the 1s and 0s of binary code.
COBOL (COmmon Business
Oriented Language)
Programming language for business
applications that can process large data
files with alphanumeric characters.
C
A powerful programming language
with tight control and efficiency of
execution; portable across different
microprocessors and used primarily
with PCs.
C++
Object-oriented version of the C programming language.
Visual Basic
Widely used visual programming tool
and environment for creating applications that run on Microsoft Windows.
visual programming
The construction of software programs
by selecting and arranging graphic or
iconic elements representing sections
of program code.
WINDOW ON MANAGEMENT
The Case for Linux
Unilever is a $52 billion consumer products company, producing such well-known brands as
Dove soap, Lipton
tea, Breyers ice
cream, Hellmann’s mayonnaise, and
Knorr soups. After years of running disparate UNIX systems on its global servers,
the company is switching to Linux and
Intel-standard hardware. Because of so
many disparate systems and standards, the
company faced escalating hardware and
systems support costs. Standardizing on
Linux will help Unilever, which operates
computer systems in 80 countries, to
lower its infrastructure costs. Unilever
does not want to have to worry about operating system or hardware compatibility
issues in all those countries.
Migrating from UNIX to Linux on its
e-mail servers, Web servers, and security
software applications will help Unilever
simplify and standardize its IT infrastructure while providing increased performance. Colin Hope-Murray, Unilever’s
chief technology officer (CTO), says,
“Every time we put in Linux, we are
amazed and surprised at its speed and the
reliability with which we can run it.” He
says it runs about three times faster than
other popular operating systems and
rarely, if ever, crashes. As Linux software
matures, Unilever expects to use it for
running its heavy-duty data management,
customer relationship management, and
enterprise resource planning systems.
Other companies are realizing similar
benefits from switching to Linux. Frank
Pipolo, the director of Internet operations
for Internetworld.com, says his company
has not had to reboot its server in the two
years they have been using Linux. Linux
has performed with such reliability that
even many financial services firms are
now running mission-critical systems on
it. The life sciences industry, including
hospitals, biotechnology firms, individual
physicians, and government laboratories,
has been attracted to Linux because of
its need to integrate data from thousands
of sources. Linux provides an open-standards-based technology that can be
used for this purpose.
The major hardware and software vendors, including IBM, Hewlett-Packard,
Dell, Compaq, Oracle, and SAP, now
offer Linux-compatible versions of their
products. About 25 countries have governmental initiatives promoting Linux and
other open-source software, attracted by
this operating system’s reliability and low
cost. China has adopted Linux as the
mainstream operating system for its server
computers, and the German government
has turned to Linux to avoid being locked
into a specific proprietary system. Linux
continues to improve each day because a
worldwide network of open-source programmers keeps developing new features
and capabilities. Recent upgrades will
make Linux easier to use for accessing
large amounts of data and for running
heavier processing loads. However, some
businesses do not like the idea of having
to constantly upgrade
their systems when
new versions of Linux come out so often.
The area of greatest success for Linux
has been as a server operating system, where
its use is expanding at a compounded
annual rate near 30 percent. However,
business application software that runs on
Linux is in short supply and Linux has yet to
make inroads into desktop applications.
Although there are desktop productivity
tools, such as Sun StarOffice, that work with
Linux, many companies are reluctant to
abandon Microsoft Office tools. Although
StarOffice has a very low purchase price
compared to Office, the costs of training
end users in new desktop software may outweigh the savings from using Linux desktop
tools. Linux on the desktop is confined primarily to workstations performing graphics-intensive processing.
To Think About: Should a company select
Linux as an operating system for its major
business applications? What are the management benefits Linux provides? What are
the business as well as the technology
issues that should be addressed when
making that decision?
Sources: John Zipperer, “Linux Suits Up for the
Enterprise,” Internet World Magazine, April 1,
2003; Todd R. Weiss, “Unilever Dumping UNIX
for Linux in Global Move,” Computerworld,
January 27, 2003; Larry Greenemeier, “Adolescent
Angst,” Information Week, June 2, 2003, and
“Linux Makes Mainstream Moves,” Information
Week, January 20, 2003; Jennifer Maselli, “Linux
Lags on the Desktop,” Information Week, April 7,
2003; and Joshua Weinberger, “The One-Stop
Shop?” Baseline Magazine, February 2003.
graphical user interface to choose and modify preselected sections of code written in the
BASIC programming language.
Fourth-Generation Languages
fourth-generation language
A programming language that can be
employed directly by end users or less-skilled programmers to develop computer applications more rapidly than
conventional programming languages.
Fourth-generation languages consist of a variety of software tools that enable end users to
develop software applications with minimal or no technical assistance or that enhance professional programmers’ productivity. Fourth-generation languages tend to be nonprocedural
or less procedural than conventional programming languages. Procedural languages require
specification of the sequence of steps, or procedures, that tell the computer what to do and
how to do it. Nonprocedural languages need only specify what has to be accomplished rather
TABLE 6-5
Application Programming Languages

C: Used primarily by professional programmers to create operating systems and application software, especially for PCs. Combines machine portability with tight control and efficient use of computer resources and can work on a variety of different computers.

C++: Object-oriented version of C that is used for developing application software.

COBOL: Designed for business administration to process large data files with alphanumeric characters (mixed alphabetic and numeric data).

Visual Basic: Visual programming tool for creating applications running on Windows operating systems.

FORTRAN (FORmula TRANslator): Useful for processing numeric data. Some business applications can be written in FORTRAN, but it is primarily used for scientific and engineering applications.

BASIC (Beginners All-Purpose Symbolic Instruction Code): Developed in 1964 to teach students how to use computers. Easy to use but does few computer-processing tasks well, even though it does them all. Used primarily in education to teach programming.

Pascal: Developed in the late 1960s and used primarily in computer science courses to teach sound programming practices.

Assembly language: “Second-generation” language that is very close to machine language and is designed for a specific machine and specific microprocessors. Gives programmers great control, but it is difficult and costly to write and learn. Used primarily today in system software.
than provide details about how to carry out the task. Some of these nonprocedural languages
are natural languages that enable users to communicate with the computer using conversational commands resembling human speech.
Table 6-6 shows that there are seven categories of fourth-generation languages: PC software tools, query languages, report generators, graphics languages, application generators,
application software packages, and very high-level programming languages. The table shows
the tools ordered in terms of ease of use by nonprogramming end users. End users are most
likely to work with PC software tools and query languages. Query languages are software
tools that provide immediate online answers to requests for information that are not predefined, such as “Who are the highest-performing sales representatives?” Query languages are
often tied to data management software (described later in this chapter) and to database
management systems (see Chapter 7).
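As a concrete illustration, the short sketch below uses Python's built-in sqlite3 module to pose that kind of question in SQL, the query language listed in Table 6-6. The table name, column names, and sales figures are made up for the example; they are not part of any system described in this chapter.

import sqlite3

# Build a small, throwaway table of (hypothetical) sales representatives.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_reps (name TEXT, region TEXT, total_sales REAL)")
conn.executemany(
    "INSERT INTO sales_reps VALUES (?, ?, ?)",
    [("Anders", "North", 412000.0),
     ("Berit", "South", 538000.0),
     ("Carl", "West", 497000.0)],
)

# The ad hoc request "Who are the highest-performing sales representatives?"
# states only WHAT is wanted; the database software works out HOW to answer it.
for name, total in conn.execute(
    "SELECT name, total_sales FROM sales_reps ORDER BY total_sales DESC LIMIT 2"
):
    print(name, total)

conn.close()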
natural language
Nonprocedural language that
enables users to communicate with
the computer using conversational
commands resembling human speech.
query language
Software tool that provides immediate
online answers to requests for information that are not predefined.
Contemporary Tools for Software Development
The need for businesses to fashion systems that are flexible or that can run over the Internet
has stimulated approaches to software development based on object-oriented programming
tools and new programming languages, such as Java, hypertext markup language (HTML),
and eXtensible Markup Language (XML).
Object-Oriented Programming Traditional software development methods have treated
data and procedures as independent components. A separate programming procedure must
be written every time someone wants to take an action on a particular piece of data. The procedures act on data that the program passes to them.
Object-oriented programming combines data and the specific procedures that operate
on those data into one object. The object combines data and program code. Instead of passing data to procedures, programs send a message for an object to perform an operation that is
already embedded in it. (Procedures are termed methods in object-oriented languages.) The
same message may be sent to many different objects, but each will implement that message
differently. For example, an object-oriented financial application might have Customer
objects sending debit and credit messages to Account objects. The Account objects in turn
might maintain Cash-on-Hand, Accounts-Payable, and Accounts-Receivable objects.
object-oriented programming
An approach to software development
that combines data and procedures
into a single object.
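A minimal sketch of this idea, written here in Python purely for illustration: the Customer object never touches the account balance directly; it sends a debit message, and the Account object responds with the method embedded in it. The class names follow the example above; the balance and amounts are invented.

class Account:
    """Combines data (a balance) with the procedures, or methods, that operate on it."""
    def __init__(self, name, balance=0.0):
        self.name = name
        self.balance = balance

    def credit(self, amount):      # responds to a "credit" message
        self.balance += amount

    def debit(self, amount):       # responds to a "debit" message
        self.balance -= amount


class Customer:
    def __init__(self, name, account):
        self.name = name
        self.account = account

    def pay(self, amount):
        # Instead of manipulating the data itself, the Customer object sends a
        # message asking the Account object to perform an operation it already contains.
        self.account.debit(amount)


cash_on_hand = Account("Cash-on-Hand", balance=500.0)
customer = Customer("A. Hansen", cash_on_hand)
customer.pay(120.0)
print(cash_on_hand.balance)        # 380.0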
TABLE 6-6
Categories of Fourth-Generation Languages
(listed from tools oriented toward end users to tools oriented toward IS professionals)

PC software tools: General-purpose application software packages for PCs. Examples: WordPerfect, Microsoft Access.

Query language: Languages for retrieving data stored in databases or files. Capable of supporting requests for information that are not predefined. Example: SQL.

Report generator: Extract data from files or databases to create customized reports in a wide range of formats not routinely produced by an information system. Generally provide more control over the way data are formatted, organized, and displayed than query languages. Example: Crystal Reports.

Graphics language: Retrieve data from files or databases and display them in graphic format. Some graphics software can perform arithmetic or logical operations on data as well. Examples: SAS Graph, Systat.

Application generator: Contain preprogrammed modules that can generate entire applications, including Web sites, greatly speeding development. A user can specify what needs to be done, and the application generator will create the appropriate program code for input, validation, update, processing, and reporting. Examples: FOCUS, PowerBuilder, Microsoft FrontPage.

Application software package: Software programs sold or leased by commercial vendors that eliminate the need for custom-written, in-house software. Examples: PeopleSoft HCM, SAP R/3.

Very high-level programming language: Generate program code with fewer instructions than conventional languages, such as COBOL or FORTRAN. Designed primarily as productivity tools for professional programmers. Examples: APL, Nomad2.
An object’s data are encapsulated from other parts of the system, so each object is an independent software building block that can be used in many different systems without changing the program code. Thus, object-oriented programming is expected to reduce the time
and cost of writing software by producing program code or software chips that can be reused
in other related systems. Productivity gains from object-oriented technology could be magnified if objects were stored in reusable software libraries and explicitly designed for reuse.
However, such benefits are unlikely to be realized unless organizations develop appropriate
standards and procedures for reuse (Kim and Stohr, 1998).
Object-oriented programming is based on the concepts of class and inheritance. Program
code is not written separately for every object but for classes, or general categories, of similar
objects. Objects belonging to a certain class have the features of that class. Classes of objects
in turn can inherit all the structure and behaviors of a more general class and then add variables and behaviors unique to each object. New classes of objects are created by choosing an
existing class and specifying how the new class differs from the existing class, instead of starting from scratch each time.
We can see how class and inheritance work in Figure 6-9, which illustrates the relationships among classes concerning employees and how they are paid. Employee is the common ancestor, or superclass, for the other three classes. Salaried, Hourly, and Temporary are
subclasses of Employee. The class name is in the top compartment, the attributes for each
class are in the middle portion of each box, and the list of operations is in the bottom portion of each box. The features that are shared by all employees (id [identification number],
name, address, date hired, position, and pay) are stored in the employee superclass, whereas
each subclass stores features that are specific to that particular type of employee. Specific to
Hourly employees, for example, are their hourly rates and overtime rates. A solid line pointing from the subclass to the superclass is a generalization path showing that the subclasses
Salaried, Hourly, and Temporary have common features that can be generalized into the
superclass Employee.
FIGURE 6-9
Class and inheritance
This figure illustrates how
classes inherit the common features of their superclass.
Figure contents: the Employee superclass (attributes id, name, address, dateHired, position, pay) and its subclasses Salaried (attributes annualSalary, bonus; operation calcBonus), Hourly (attributes hourlyRate, overtimeRate; operation calcOvertime), and Temporary (attributes dailyRate, ytdHours; operation determinePermEligibility).
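The hierarchy in Figure 6-9 can also be sketched in code. The Python fragment below (again chosen only for readability) defines the Employee superclass and two of its subclasses; the attribute and method names follow the figure, while the bonus and overtime rules are invented placeholders.

class Employee:
    """Superclass holding the features shared by all employees (Figure 6-9)."""
    def __init__(self, id, name, address, date_hired, position, pay):
        self.id = id
        self.name = name
        self.address = address
        self.date_hired = date_hired
        self.position = position
        self.pay = pay


class Salaried(Employee):
    """Inherits everything in Employee and adds salaried-specific features."""
    def __init__(self, annual_salary, bonus=0.0, **shared):
        super().__init__(**shared)
        self.annual_salary = annual_salary
        self.bonus = bonus

    def calc_bonus(self, rate=0.05):                # invented rule for illustration
        self.bonus = self.annual_salary * rate
        return self.bonus


class Hourly(Employee):
    """Inherits everything in Employee and adds hourly and overtime rates."""
    def __init__(self, hourly_rate, overtime_rate, **shared):
        super().__init__(**shared)
        self.hourly_rate = hourly_rate
        self.overtime_rate = overtime_rate

    def calc_overtime(self, overtime_hours):        # invented rule for illustration
        return overtime_hours * self.overtime_rate


# A Temporary subclass with dailyRate, ytdHours, and determinePermEligibility
# would be defined the same way. Creating an Hourly employee reuses the shared
# Employee code without rewriting it:
clerk = Hourly(hourly_rate=18.0, overtime_rate=27.0,
               id=101, name="A. Smith", address="12 Main St",
               date_hired="2003-05-01", position="Clerk", pay="hourly")
print(clerk.name, clerk.calc_overtime(4))           # A. Smith 108.0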
Java Java is a platform-independent, object-oriented programming language developed by
Sun Microsystems. Java software is designed to run on any computer or computing device,
regardless of the specific microprocessor or operating system it uses. A Macintosh PC, an
IBM PC running Windows, a Sun server running UNIX, and even a smart cellular phone or
personal digital assistant can share the same Java application.
Java can be used to create miniature programs called “applets” designed to reside on centralized network servers. The network delivers only the applets required for a specific function. With Java applets residing on a network, a user can download only the software functions and data that he or she needs to perform a particular task, such as analyzing the revenue
from one sales territory. The user does not need to maintain large software programs or data
files on his or her desktop machine.
Java is also a very robust language that can handle text, data, graphics, sound, and video,
all within one program if needed. Java applets often are used to provide interactive capabilities for Web pages, such as animated cartoons or real-time news tickers, or functionality to
calculate a loan payment schedule online in response to financial data input by the user.
Java can also let PC users manipulate data on networked systems using Web browsers (described later in this chapter), reducing the need to write specialized software. Increasingly,
Java is being used for more complex e-commerce and e-business applications that require
communication with an organization’s back-end transaction processing systems.
Despite these benefits, Java has not yet fulfilled its early promise to revolutionize software development and use. Programs written in current versions of Java tend to run slower
than “native” programs, although high-performance versions of Java are under development. Vendors such as Microsoft are supporting alternative versions of Java that include
subtle differences that affect Java’s performance in different pieces of hardware and operating systems.
Hypertext Markup Language (HTML) and XML Hypertext markup language
(HTML) is a page description language for creating hypertext or hypermedia documents,
such as Web pages. (See the discussions of hypermedia in Chapter 7 and of Web pages in
Chapter 9.) HTML uses instructions called tags to specify how text, graphics, video, and
sound are placed on a document and to create dynamic links to other documents and
objects stored in the same or remote computers. Using these links, a user need only point at
Java
Programming language that can
deliver only the software functionality
needed for a particular task, such as a
small applet downloaded from a network; can run on any computer and
operating system.
hypertext markup language
(HTML)
Page description language for creating Web pages and other hypermedia
documents.
XML (eXtensible Markup
Language)
General-purpose language that
describes the structure of a document
and supports links to multiple documents, allowing data to be manipulated by the computer. Used for both
Web and non-Web applications.
XHTML (Extensible Hypertext
Markup Language)
Hybrid of HTML and XML that provides more flexibility than HTML.
a highlighted keyword or graphic, click on it, and immediately be transported to another
document.
HTML programs can be custom written, but they also can be created using the HTML
authoring capabilities of Web browsers or of popular word processing, spreadsheet, data
management, and presentation graphics software packages. HTML editors, such as
Microsoft FrontPage and Adobe GoLive, are more powerful HTML authoring tool programs
for creating Web pages.
XML, which stands for eXtensible Markup Language, is a new specification originally
designed to improve the usefulness of Web documents. Whereas HTML determines only how
text and images should be displayed on a Web document, XML describes what the data in
these documents mean so the data can be used in computer programs. In XML, a number is
not simply a number; the XML tag specifies whether the number represents a price, a date,
or a ZIP code. Table 6-7 illustrates the differences between HTML and XML.
By tagging selected elements of the content of documents for their meanings, XML
makes it possible for computers to automatically manipulate and interpret their data and perform operations on the data without human intervention. Web browsers and computer programs, such as order processing or enterprise software, can follow programmed rules for
applying and displaying the data. XML provides a standard format for data exchange.
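A small example makes the point concrete. The Python sketch below builds an XML fragment modeled on Table 6-7 and reads it back with the standard xml.etree.ElementTree module; the element and attribute names are the table's example, not an industry vocabulary, and the dollar sign is dropped so the price parses as a number.

import xml.etree.ElementTree as ET

# An XML fragment modeled on the example in Table 6-7.
document = """
<AUTOMOBILE TYPE="Subcompact">
    <PASSENGER UNIT="PASS">4</PASSENGER>
    <PRICE CURRENCY="USD">16800</PRICE>
</AUTOMOBILE>
"""

car = ET.fromstring(document)

# Because each element is tagged with its meaning, a program can act on the
# data directly; no person has to interpret the page layout.
passengers = int(car.find("PASSENGER").text)
price = float(car.find("PRICE").text)
currency = car.find("PRICE").get("CURRENCY")

print(car.get("TYPE"), passengers, price, currency)   # Subcompact 4 16800.0 USD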
XML is becoming a serious technology for Web-based applications. The key to XML is the
setting of standards (or vocabulary) that enable both sending and receiving parties to describe
data the same way. Each standard is contained in an XML Document Type Definition
(DTD), usually simply called a dictionary. For example, RosettaNet is an XML dictionary
developed by 34 leading companies within the PC industry. It defines all properties of a personal computer, such as modems, monitors, and cache memory. As a result, the entire PC
industry is now able to speak the same language. The entire supply chain of the industry can
now easily be linked without requiring business partners or customers to use a particular programming language, application, or operating system to exchange data. Companies can also
use XML to access and manipulate their own internal data without high software development costs.
XHTML (Extensible Hypertext Markup Language) is a hybrid combining features of
HTML and XML that has been recommended as a replacement for HTML by the World
Wide Web Consortium (which works with business and government to create Web standards). XHTML reformulates HTML with XML document-type definitions, giving it additional flexibility and the ability to create Web pages that can be read by many different computing platforms and Net display devices.
Application Software Packages and
Productivity Software
software package
A prewritten, precoded, commercially
available set of programs that eliminates the need to write software programs for certain functions.
Much of the software used in businesses today is not custom programmed but consists of
application software packages and desktop productivity tools. A software package is a prewritten, precoded, commercially available set of programs that eliminates the need for individuals or organizations to write their own software programs for certain functions. There are software packages for system software, but most package software is application software.
Software packages that run on mainframes and larger computers usually require professional programmers for their installation and support. However, there are also application
software packages developed explicitly for end users. Productivity software packages for word
TABLE 6-7
Comparison of HTML and XML

Plain English: Subcompact
HTML:          <TITLE>Automobile</TITLE>
XML:           <AUTOMOBILE TYPE="Subcompact">

Plain English: 4 passenger
HTML:          <LI>4 passenger
XML:           <PASSENGER UNIT="PASS">4</PASSENGER>

Plain English: $16,800
HTML:          <LI>$16,800
XML:           <PRICE CURRENCY="USD">$16,800</PRICE>
FIGURE 6-10
Text and the spell-checking option in Microsoft Word
Word processing software provides many easy-to-use options
to create and output a text document to meet a user’s specifications.
Source: Courtesy of Microsoft.
processing, spreadsheets, data management, presentation graphics, integrated software packages, e-mail, Web browsers, and groupware are the most widely used software tools among
business and consumer users.
Word Processing Software
Word processing software stores text data electronically as a computer file rather than on
paper. The word processing software allows the user to make changes in the document electronically in memory. This eliminates the need to retype an entire page to incorporate corrections. The software has formatting options to make changes in line spacing, margins,
character size, and column width. Microsoft Word and WordPerfect are popular word processing packages. Figure 6-10 illustrates a Microsoft Word screen displaying text, spelling
and grammar checking, and major menu options.
Most word processing software has advanced features that automate other writing tasks:
spelling checkers, style checkers (to analyze grammar and punctuation), thesaurus programs,
and mail merge programs, which link letters or other text documents with names and addresses
in a mailing list. The newest versions of this software can create and access Web pages.
Businesses that need to create highly professional-looking brochures, manuals, or books
will likely use desktop publishing software for this purpose. Desktop publishing software
provides more control over the placement of text, graphics, and photos in the layout of a page
than does word processing software. Adobe PageMaker and QuarkXPress are two popular
desktop publishing packages.
word processing software
Software for electronically creating,
editing, formatting, and printing documents.
desktop publishing software
Software for producing professional-quality documents with capabilities for design, layout, and work with graphics.
Spreadsheets
Electronic spreadsheet software provides computerized versions of traditional financial modeling tools, such as the accountant’s columnar pad, pencil, and calculator. An electronic
spreadsheet is organized into a grid of columns and rows. The power of the electronic spreadsheet is evident when one changes a value or values because all other related values on the
spreadsheet will be automatically recomputed.
Spreadsheets are valuable for applications in which numerous calculations with pieces of
data must be related to each other. Spreadsheets also are useful for applications that require
modeling and what-if analysis. After the user has constructed a set of mathematical relationships, the spreadsheet can be recalculated instantaneously using a different set of assumptions. A number of alternatives can easily be evaluated by changing one or two pieces of data
without having to rekey the rest of the worksheet. Many spreadsheet packages include
spreadsheet
Software displaying data in a grid of
columns and rows, capable of easily
recalculating numerical data.
FIGURE 6-11
Spreadsheet software
Spreadsheet software organizes
data into columns and rows for
analysis and manipulation.
Contemporary spreadsheet software provides graphing abilities
for clear visual representation of
the data in the spreadsheets.
This sample break-even analysis
is represented as numbers in a
spreadsheet as well as a line
graph for easy interpretation.
Figure data: Custom Neckties pro forma income statement and break-even analysis. Total fixed cost $19,000.00; variable cost per unit $3.00; average sales price $17.00; contribution margin $14.00; break-even point 1,357 units. The accompanying chart plots revenue, fixed cost, and total cost (in thousands of dollars) against units sold, from 0 to 2,714 units.
graphics functions that can present data in the form of line graphs, bar graphs, or pie charts.
The most popular spreadsheet packages are Microsoft Excel and Lotus 1-2-3. The newest
versions of this software can read and write Web files. Figure 6-11 illustrates the output from
a spreadsheet for a break-even analysis and its accompanying graph.
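The recalculation logic behind Figure 6-11 can be written out in a few lines. The Python function below is a stand-in for the spreadsheet's formulas and uses the figure's assumptions (fixed cost of $19,000, variable cost of $3.00 per unit, average sales price of $17.00); it is a sketch of the arithmetic, not of any spreadsheet product.

def breakeven_units(fixed_cost, price, variable_cost):
    """Break-even point = fixed cost / contribution margin (price - variable cost)."""
    return fixed_cost / (price - variable_cost)

# Base assumptions from the Custom Neckties example in Figure 6-11.
print(round(breakeven_units(19_000, 17.00, 3.00)))      # 1357 units, as in the figure

# What-if analysis: change one assumption and the answer is recomputed,
# just as a spreadsheet recalculates when a single cell changes.
for price in (15.00, 17.00, 19.00):
    units = breakeven_units(19_000, price, 3.00)
    print(f"price ${price:.2f} -> break even at about {units:,.0f} units")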
Data Management Software
data management software
Software used for creating and
manipulating lists, creating files and
databases to store data, and combining information for reports.
Although spreadsheet programs are powerful tools for manipulating quantitative data, data
management software is more suitable for creating and manipulating lists and for combining
information from different files. PC database management packages have programming features and easy-to-learn menus that enable nonspecialists to build small information systems.
Data management software typically has facilities for creating files and databases and for
storing, modifying, and manipulating data for reports and queries. A detailed treatment of
data management software and database management systems can be found in Chapter 7.
Popular database management software for the personal computer includes Microsoft
Access, which has been enhanced to publish data on the Web. Figure 6-12 shows a screen
from Microsoft Access illustrating some of its capabilities.
Presentation Graphics
presentation graphics
Software to create professional-quality
graphics presentations that can incorporate charts, sound, animation, photos, and video clips.
Presentation graphics software allows users to create professional-quality graphics presentations. This software can convert numeric data into charts and other types of graphics and can
include multimedia displays of sound, animation, photos, and video clips. The leading presentation graphics packages include capabilities for computer-generated slide shows and
translating content for the Web. Microsoft PowerPoint, Lotus Freelance Graphics, and Aldus
Persuasion are popular presentation graphics packages.
FIGURE 6-12
Data management software
This screen from Microsoft
Access illustrates some of its
powerful capabilities for managing and organizing information.
Integrated Software Packages and Software Suites
Integrated software packages combine the functions of the most important PC software
packages, such as word processing, spreadsheets, presentation graphics, and data management. This integration provides a more general-purpose software tool and eliminates redundant data entry and data maintenance. For example, the break-even analysis spreadsheet
illustrated in Figure 6-11 could be reformatted into a polished report with word processing
software without separately keying the data into both programs. Although integrated packages can do many things well, they generally do not have the same power and depth as
single-purpose packages.
Integrated software packages should be distinguished from software suites, which are full-featured versions of application software sold as a unit. Microsoft Office is an example.
There are different versions of Office for home and business users, but the core desktop tools
include Word word processing software; Excel spreadsheet software; Access database software;
PowerPoint presentation graphics software; and Outlook, a set of tools for e-mail, scheduling, and contact management. Office 2000 and Office XP contain capabilities to support
integrated software package
A software package that combines two
or more applications, such as word
processing and spreadsheets, providing for easy transfer of data between
them.
Office 2000, Office XP, and Office
2003
Integrated desktop productivity software suites with capabilities for supporting collaborative work on the
Web or incorporating information
from the Web into documents.
Users can create professional-looking
electronic presentations incorporating text, diagrams, and other multimedia elements using presentation
graphics software. This slide was created using Microsoft PowerPoint.
collaborative work on the Web, including the ability to manage multiple comments and
revisions from several reviewers in a single document and the ability to automatically notify
others about changes to documents. Documents created with Office tools can be viewed
with a Web browser and published on the Web. Office XP users can automatically refresh
their documents with information from the Web, such as stock quotes and news flashes, and
manage their e-mail accounts from a single view. Office 2003 includes tools for creating
XML documents that can be linked to data in another application and for electronic note
taking, as well as a Business Contact Manager customer relationship application for small
businesses. Multiple Office 2003 users will be able to collaborate in the creation and revision of documents by sharing them as e-mail attachments and to control who has access to
documents. Some Office 2003 capabilities are not on the desktop but must be accessed as
services from the Microsoft server. OpenOffice (which can be downloaded over the Internet)
and Sun Microsystems’ StarOffice are low-cost alternatives to Microsoft Office tools that can
run on Linux.
E-mail Software
electronic mail (e-mail)
The computer-to-computer exchange
of messages.
Electronic mail (e-mail) is used for the computer-to-computer exchange of messages and
is an important tool for communication and collaborative work. A person can use a networked computer to send notes or lengthier documents to a recipient on the same network
or a different network. Many organizations operate their own electronic-mail systems, but
communications companies, such as MCI and AT&T, offer these services, along with
commercial online information services, such as America Online and public networks on
the Internet.
Web browsers and PC software suites have e-mail capabilities, but specialized e-mail software packages are also available for use on the Internet. In addition to providing electronic
messaging, e-mail software has capabilities for routing messages to multiple recipients, message forwarding, and attaching text documents or multimedia to messages.
Web Browsers
Web browser
An easy-to-use software tool for accessing the World Wide Web and the
Internet.
Web browsers are easy-to-use software tools for displaying Web pages and for accessing the
Web and other Internet resources. Web browser software features a point-and-click graphical
user interface that can be employed throughout the Internet to access and display information stored on computers at other Internet sites. Browsers can display or present graphics,
audio, and video information as well as traditional text, and they allow you to click on-screen
buttons or highlighted words to link to related Web sites. Web browsers have become the primary interface for accessing the Internet or for using networked systems based on Internet
technology. You can see examples of Web browser software by looking at the illustrations of
Web pages in each chapter of this text.
The leading commercial Web browser is Microsoft’s Internet Explorer. It includes capabilities for using e-mail, file transfer, online discussion groups and bulletin boards, and other
Internet services.
Groupware
groupware
Software that provides functions and
services that support the collaborative
activities of workgroups.
Groupware provides functions and services to support the collaborative activities of workgroups. Groupware includes software for group writing and commenting, information-sharing,
electronic meetings, scheduling, and e-mail and a network to connect the members of the
group as they work on their own desktop computers, often in widely scattered locations. Any
group member can review the ideas of others at any time and add to them, or individuals can
post a document for others to comment on or edit. Leading commercial groupware products
include Lotus Notes and OpenText’s LiveLink, and they have been enhanced so that they can
be integrated with the Internet or private intranets. Groove is a new groupware tool based on
peer-to-peer technology, which enables people to work directly with other people over the
Internet without going through a central server. Business versions of Microsoft’s Office 2003
software suite feature Web-based groupware services.
OpenText LiveLink provides a Webbased environment for group collaboration, with capabilities for online
meeting and calendaring, storing
and sharing documents, and managing project teams and work spaces.
Software for Enterprise Integration
and E-Business
Chapters 2 and 3 discussed the growing organizational need to integrate functions and business processes to improve control, coordination, and responsiveness by allowing data and
information to flow freely between different parts of the organization. Poorly integrated
applications can create costly inefficiencies or slow customer service that become competitive liabilities. Alternative software solutions are available to promote enterprise integration.
One alternative, which we introduced in Chapter 2, is to replace isolated systems that
cannot exchange data with enterprise applications for customer relationship management,
supply chain management, knowledge management, and enterprise systems that integrate
multiple business processes. Chapter 10 provides a detailed description of these enterprise
applications and their roles in digitally integrating the enterprise.
Enterprise applications are one of many paths to achieving integration. There are also
alternative software solutions that allow firms to achieve some measure of integration from
their existing systems. Most firms cannot jettison all of their legacy systems and create
enterprise-wide integration from scratch. A legacy system is a system that has been in existence for a long time and that continues to be used to avoid the high cost of replacing or
redesigning it. Nor do all firms want to get rid of their existing business processes underlying their legacy systems in order to use the business processes defined by enterprise systems
and other enterprise applications. Many existing legacy mainframe applications are essential to daily operations and very risky to change, and they can be made more useful if their
information and business logic can be integrated with other applications. One way to integrate various legacy applications is to use special software called middleware to create an
interface or bridge between two different systems. Middleware is software that connects
two otherwise separate applications, allowing them to communicate with each other and
to pass data between them. Middleware may consist of custom software written in-house or
a software package.
Instead of custom-writing software to connect one application to another, companies can
now purchase enterprise application integration (EAI) software to connect disparate applications or application clusters. This software uses special middleware enabling multiple systems to exchange data through a single software hub rather than building countless custom
software interfaces to link each system (see Figure 6-13). There are a variety of commercial
EAI software products, some featuring tools to link applications together through business
process modeling. The software allows system builders to model their business processes
graphically and define the rules that applications should follow to make these processes
legacy system
A system that has been in existence
for a long time and that continues to
be used to avoid the high cost of
replacing or redesigning it.
middleware
Software that connects two disparate
applications, allowing them to communicate with each other and to
exchange data.
enterprise application integration
(EAI) software
Software that works with specific software platforms to tie together multiple applications to support enterprise
integration.
FIGURE 6-13
Enterprise application integration (EAI) software versus traditional integration
Figure contents: (a) Applications 1 through 5 each connect only to a central EAI middleware hub; (b) the same five applications are wired to one another directly through point-to-point interfaces.
EAI software (a) uses special middleware that creates a common platform through which all applications can freely communicate with each other. EAI requires much less programming than traditional
point-to-point integration (b).
Web services
Set of universal standards using
Internet technology for integrating
different applications from different
sources without time-consuming custom coding. Used for linking systems
of different organizations or for linking disparate systems within the same
organization.
work. The software then generates the underlying program instructions to link existing applications to each other so they can exchange data via messages governed by the rules of the
business processes. (An example of these rules might be “When an order has been placed,
the order application should tell the accounting system to send an invoice and should tell
shipping to send the order to the customer.”) WebMethods, Tibco, CrossWorlds, SeeBeyond,
BEA, and Vitria are leading enterprise application integration software vendors. Although
EAI software does not require as extensive organizational change or programming as the software for enterprise systems, behavioral changes are still required to make this level of integration work properly (Lee, Siau, and Hong, 2003).
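The hub-and-spoke idea can be caricatured in a few lines of code. The sketch below is not modeled on any of the EAI products named above; it simply shows a central hub applying the order-placed rule quoted earlier, so that the order application talks only to the hub rather than to each downstream system.

class MessageHub:
    """Hypothetical middleware hub: applications exchange data through it (Figure 6-13a)."""
    def __init__(self):
        self.routes = {}                      # event name -> handlers to notify

    def subscribe(self, event, handler):
        self.routes.setdefault(event, []).append(handler)

    def publish(self, event, data):
        for handler in self.routes.get(event, []):
            handler(data)


# Stand-ins for separate applications that know nothing about one another.
def accounting_system(order):
    print("Accounting: send invoice to", order["customer"])

def shipping_system(order):
    print("Shipping: send order", order["id"], "to", order["customer"])


hub = MessageHub()
# Business rule: when an order has been placed, tell accounting and shipping.
hub.subscribe("order_placed", accounting_system)
hub.subscribe("order_placed", shipping_system)

# The order entry application publishes one message to the hub instead of
# maintaining a custom interface to every other system.
hub.publish("order_placed", {"id": 1001, "customer": "ACME Corp"})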
Enterprise application integration software tools are product-specific, meaning that they
can only work with certain pieces of application software and operating systems. For example, one EAI tool to connect a specific piece of sales order entry software to manufacturing,
shipping, and billing applications might not work with another vendor’s order entry software.
Web services provide a standardized alternative for dealing with integration. Web services
use XML and other open software and communication standards for exchanging information between two different systems, regardless of the operating systems or programming languages on which they are based. They can be used to build open standard Web-based applications linking systems of two different organizations, and they can also be used to create
applications that link disparate systems within a single company. Web services are not tied to
any one operating system or programming language and different applications can use them
to communicate with each other in a standard way without time-consuming custom coding.
For example, the health insurer Cigna Corporation created a special Web site called
MyCigna.com that enables users to track claims, order medications, compare drugs and hospitals by cost, and change physicians online. Cigna uses Web services to pull out this information from a number of different computer systems. Yellow Transportation, a trucking
company based in Overland Park, Kansas, uses Web services to provide shipping rates and schedules from its internal systems to customers connected to the Internet (Salkever, 2003).
Chapter 13 provides more detail on Web services and the ways they can be used to build new
information systems.
Middleware also plays an important role in the infrastructure typically used for e-commerce
and e-business. Review Figure 6-6, which illustrates a two-tier client/server architecture in
which an application’s interface, business logic, and data management components are split
FIGURE 6-14
A multitiered architecture for e-commerce and e-business
Figure contents: a client connects over the Internet to a Web server, which passes requests to an application server, which in turn links to a back-end layer of systems and data (Sales, Production, Accounting, HR).
This multitiered architecture has middle layers for servicing Web page requests and for providing services to link user clients with a back-end layer of corporate systems and their data. In this particular architecture, the application server runs on its own dedicated hardware.
between client and server and the client communicates directly with the server. This had
been a very popular architecture for distributed computing. However, e-commerce and e-business require much more functionality, such as the ability to access and deliver Web
pages, to respond to user requests for data from a company’s product catalog, to take customer orders using Web page interfaces, or even to adjust advertising on the user’s display
screen based on the user’s characteristics. Figure 6-14 illustrates a multitiered architecture,
which has additional middle layers between the user client layer and the back-end layer
housing data management services. In the middle layers are various pieces of software for
servicing requests for Web pages, for executing business logic, and for accessing data. A Web
server is the software for locating and managing stored Web pages. It locates the Web pages
requested by a user on the computer where they are stored and delivers the Web pages to the
user’s computer. An application server is middleware software that handles all application
operations between a user and an organization’s back-end business systems. In an
Internet/intranet environment, it can be used to link the Web server to back-end systems and
data repositories, supplying the business logic for handling all application operations, including transaction processing and data access. The application server takes requests from the
Web server, runs the business logic to process transactions based on those requests, and provides connectivity to the organization’s back-end systems. For example, such middleware
would allow users to request data (such as an order) from the actual transaction system (such
as an order processing system) housing the data using forms displayed on a Web browser, and
it would enable the Web server to return dynamic Web pages based on information users
request. The application server may reside on the same computer as the Web server or on its
own dedicated computer. Application servers eliminate much of the need for custom programming, saving companies much time and cost in developing Web systems and in managing and updating them. Chapters 7 and 9 provide more detail on other pieces of software
that are used in multitiered client/server architectures for e-commerce and e-business.
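As a rough sketch of the division of labor just described (all names and data below are invented, and a real site would use commercial Web server and application server products rather than plain Python functions), a request flows from the Web server tier through application-server business logic to a back-end order system:

```python
# Minimal illustration of a multitiered request flow (names are hypothetical).

# Back-end tier: stands in for the corporate order processing system.
BACK_END_ORDERS = {"1001": {"status": "shipped", "total": 249.99}}

def order_system_lookup(order_id):
    """Simulates a query against the back-end order processing system."""
    return BACK_END_ORDERS.get(order_id)

# Application server tier: business logic between Web server and back end.
def application_server_handle(order_id):
    order = order_system_lookup(order_id)
    if order is None:
        return "<html><body>Order not found.</body></html>"
    return (f"<html><body>Order {order_id}: {order['status']}, "
            f"total ${order['total']:.2f}</body></html>")

# Web server tier: receives the browser request, delegates to the application
# server, and returns a dynamically generated page.
def web_server_handle(path):
    if path.startswith("/orders/"):
        return application_server_handle(path.split("/")[-1])
    return "<html><body>Static page</body></html>"

print(web_server_handle("/orders/1001"))
```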
The Window on Technology provides some examples of companies using XML, Web
services, and middleware tools to integrate their existing applications.
Web server
Software that manages requests for Web pages on the computer where they are stored and that delivers the page to the user's computer.

application server
Software in a multitiered network that provides the business logic for handling all application operations between a user and an organization's back-end business systems.

6.4 MANAGING HARDWARE AND SOFTWARE ASSETS
Selection and use of computer hardware and software technology can have a profound
impact on business performance. Computer hardware and software, thus, represent important organizational assets that must be properly managed. We now describe the most important issues in managing hardware and software technology assets: understanding the new
technology requirements for electronic commerce and the digital firm, determining the
total cost of technology assets, and determining whether to own and maintain technology
assets or use external technology service providers for the firm’s IT infrastructure.
WINDOW ON TECHNOLOGY
Application Integration to the Rescue
In January 2000, Aviall, a supplier of airplane parts and components based in the Dallas–Fort Worth, Texas, area, had lost control of its
inventory. After Aviall installed Lawson
Software to keep track of the availability
and prices of the 360,000 parts it buys and
sells to airplane operators, the wrong parts
went to the wrong customers. In some
instances, customers even received empty
boxes. Aviall’s price tracking software
could not work with its warehouse management and inventory control software
from Catalyst International, or its purchasing forecasting software from Xelus. Sales
plummeted.
Joe Lacik, Aviall’s recently hired CIO,
was charged with finding a way to get the
Lawson, Xelus, and Catalyst software to
exchange data correctly and to make sure
that the data could also be exchanged
with data from Aviall’s new customer
service software from Siebel Systems and
its Web commerce software from
BroadVision. New Era Networks of
Englewood, Colorado, a Sybase subsidiary specializing in enterprise application integration solutions, supplied Lacik
with a solution. New Era Networks tools
include XML-based application
“adapters,” which enable middleware to
connect to specific enterprise applications and legacy systems without extensive programming. The adapter figures
out common pieces of data among the
different software systems and transfers
each piece of required data between the
two programs, making sure it ends up in
the right place so that each program can
process the data. Using the New Era
middleware, Aviall was able to link the
Lawson and Siebel software so that a
sales representative could assure a customer that an order could be filled by
pulling information on prices and parts
availability from the Lawson system.
Linking the Xelus and Catalyst systems
eliminated the “empty box” problem by
ensuring the right parts got to the right
customers at the right time.
UNC Health Care, a nonprofit hospital
and medical care network affiliated with
the University of North Carolina, is using
Web services to integrate multiple systems,
including clinical systems and systems for
human resources, payroll, and financial
reporting. Its goal is to link into central
portals all the forms and tasks required in
each part of the business, and it is using
IBM’s WebSphere development tools to
get the job done. In the past, UNC had to
install and integrate
an application in
three steps. It had to analyze the connecting points between the applications that
needed to be integrated, write the software
code needed to connect the applications,
and put the connections together. UNC’s
information systems staff can bypass the
first two steps when they integrate using
Web services. And if they want to install
additional programs and make them interact with existing applications, they can
connect them to WebSphere to make
them compatible with back-end systems.
However, because the new Web services
make applications more interoperable, a
mistake in one application could affect
others more easily than if applications had
been linked point to point.
To Think About: How can enterprise application integration and Web services technology provide value for organizations?
What management, organization, and
technology issues should be addressed
when making the decision about whether
to use these technologies?
Sources: Tom Steinert-Threlkeld, “Outside the
Box,” Baseline Magazine, January 2003; Eileen
Colkin Cuneo, “TECH-DRIVEN: How Web
Services Could Change IT Jobs,” Information Week,
April 21, 2003; and “Enterprise Integration,”
www.sybase.com/solutions, accessed April 23, 2003.
Hardware Technology Requirements for
Electronic Commerce and the Digital Firm
capacity planning
The process of predicting when a computer hardware system becomes saturated to ensure that adequate computing resources are available for
work of different priorities and that
the firm has enough computing
power for its current and future needs.
Electronic commerce and electronic business are placing heavy new demands on hardware
technology because organizations are replacing so many manual and paper-based processes
with electronic ones. Much larger processing and storage resources are required to process
and store the surging digital transactions flowing between different parts of the firm, and
between the firm and its customers and suppliers. Many people using a Web site simultaneously place great strains on a computer system, as does hosting large numbers of interactive
Web pages with data-intensive graphics or video.
Capacity Planning and Scalability
Managers and information systems specialists now need to pay more attention to hardware
capacity planning and scalability than they did in the past. Capacity planning is the process
of predicting when a computer hardware system becomes saturated. It considers factors such
as the maximum number of users that the system can accommodate at one time; the impact
of existing and future software applications; and performance measures, such as minimum
response time for processing business transactions. Capacity planning ensures that the firm
has enough computing power for its current and future needs. For example, the Nasdaq
Stock Market performs ongoing capacity planning to identify peaks in the volume of stock
trading transactions and to ensure it has enough computing capacity to handle large surges
in volume when trading is very heavy.
Although capacity planning is performed by information systems specialists, input from
business managers is essential. Business managers need to determine acceptable levels of
computer response time and availability for the firm’s mission-critical systems to maintain
the level of business performance they expect. New applications, mergers and acquisitions,
and changes in business volume all impact computer workload and must be considered
when planning hardware capacity.
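As a simple numerical illustration (the figures below are hypothetical assumptions, not drawn from Nasdaq or any real firm), capacity planning often reduces to comparing projected peak demand against the throughput the installed configuration can sustain at an acceptable utilization level:

```python
import math

# Hypothetical capacity-planning arithmetic; every figure below is an assumption.
peak_concurrent_users = 4_000        # projected peak simultaneous users
requests_per_user_per_min = 3        # average requests each user generates per minute
server_throughput_per_min = 1_500    # requests one server can sustain per minute
target_utilization = 0.70            # keep headroom so response time stays acceptable

peak_load = peak_concurrent_users * requests_per_user_per_min
usable_capacity = server_throughput_per_min * target_utilization
servers_needed = math.ceil(peak_load / usable_capacity)

print(f"Projected peak load: {peak_load} requests per minute")
print(f"Servers needed at {target_utilization:.0%} utilization: {servers_needed}")
```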
Scalability refers to the ability of a computer, product, or system to expand to serve a
large number of users without breaking down. Electronic commerce and electronic
business both call for scalable IT infrastructures that have the capacity to grow with the
business as the size of a Web site and number of visitors increase. Organizations must
make sure they have sufficient computer processing, storage, and network resources to
handle surging volumes of digital transactions and to make such data immediately available online.
scalability
The ability of a computer, product, or
system to expand to serve a larger
number of users without breaking
down.
Total Cost of Ownership (TCO)
of Technology Assets
The purchase and maintenance of computer hardware and software is but one of a series
of cost components that managers must consider when selecting and managing hardware and software technology assets. The actual cost of owning technology resources
includes the original cost of acquiring and installing computer hardware and software,
as well as ongoing administration costs for hardware and software upgrades, maintenance, technical support, training, and even utility and real estate costs for running and
housing the technology. The total cost of ownership (TCO) model can be used to analyze these direct and indirect costs to help firms determine the actual cost of specific
technology implementations.
Hardware and software acquisition costs account for only about 20 percent of TCO, so
managers must pay close attention to administration costs to understand the full cost of the
firm’s hardware and software. It is possible to reduce some of these administration costs
through better management. The investment bank Morgan Stanley estimated that businesses
spent $130 billion in the past two years on unnecessary technology expenditures (Phillips,
2002). Many large firms are saddled with redundant, incompatible hardware and software
because their departments and divisions have been allowed to make their own technology
purchases. Their information technology infrastructures are excessively unwieldy and expensive to administer.
These firms could reduce their TCO through greater centralization and standardization
of their hardware and software resources (as did Nor-Cargo in the chapter-opening case).
Companies could reduce the size of the information systems staff required to support their
infrastructure if the firm minimized the number of different computer models and pieces of
software that employees are allowed to use. In a centralized infrastructure, systems can be
administered from a central location and troubleshooting can be performed from that location (David, Schuff, and St. Louis, 2002). Table 6-8 describes the most important cost components to consider in a TCO analysis.
When all these cost components are considered, the TCO for a PC might run up to three
times the original purchase price of the equipment. “Hidden costs” for support staff, downtime, and additional network management can make distributed client/server architectures—
especially those incorporating handheld computers and wireless devices—more expensive
than centralized mainframe architectures.
total cost of ownership (TCO)
Designates the total cost of owning
technology resources, including initial purchase costs, the cost of hardware and software upgrades, maintenance, technical support, and
training.
TABLE 6-8 Total Cost of Ownership (TCO) Cost Components

Hardware acquisition: Purchase price of computer hardware equipment, including computers, terminals, storage, and printers
Software acquisition: Purchase or license of software for each user
Installation: Cost to install computers and software
Training: Cost to provide training to information systems specialists and end users
Support: Cost to provide ongoing technical support, help desks, and so forth
Maintenance: Cost to upgrade the hardware and software
Infrastructure: Cost to acquire, maintain, and support related infrastructure, such as networks and specialized equipment (including storage backup units)
Downtime: Lost productivity if hardware or software failures cause the system to be unavailable for processing and user tasks
Space and energy: Real estate and utility costs for housing and providing power for the technology
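The components in Table 6-8 can be rolled into a simple spreadsheet-style model. The sketch below uses invented figures for a hypothetical fleet of 100 desktop PCs over three years; the point is only to show how the recurring components accumulate on top of the initial purchase, not to assert actual proportions for any organization.

```python
# Hypothetical three-year TCO for 100 desktop PCs; all dollar figures are invented.
units, years = 100, 3

# One-time costs
hardware_acquisition = 1_200 * units
software_acquisition = 400 * units
installation = 50 * units
training = 150 * units

# Recurring annual costs
support = 450 * units            # help desk and technical support
maintenance = 120 * units        # hardware and software upgrades
infrastructure = 100 * units     # share of networks and backup equipment
downtime = 250 * units           # lost productivity from outages
space_and_energy = 80 * units    # real estate and utilities

one_time = hardware_acquisition + software_acquisition + installation + training
recurring = years * (support + maintenance + infrastructure + downtime + space_and_energy)
tco = one_time + recurring

acquisition_share = (hardware_acquisition + software_acquisition) / tco
print(f"Three-year TCO: ${tco:,}")
print(f"Acquisition's share of total cost: {acquisition_share:.0%}")
```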
Rent or Build Decisions: Using Technology
Service Providers
Some of the most important questions facing managers are “How should we acquire and
maintain our technology assets? Should we build and run them ourselves or acquire them
from outside sources?” In the past, most companies built and ran their own computer facilities and developed their own software. Today, more and more companies are obtaining
their hardware and software technology from external service vendors. Online services for
storage and for running application software have become especially attractive options for
many firms.
Online Storage Service Providers
storage service provider (SSP)
Third-party provider that rents out
storage space to subscribers over the
Web, allowing customers to store and
access their data without having to
purchase and maintain their own
storage technology.
Some companies are using storage service providers (SSPs) to replace or supplement their
own in-house storage infrastructure. A storage service provider (SSP) is a third-party
provider that rents out storage space to subscribers over the Web. Storage service providers
sell storage as a pay-per-use utility, allowing customers to store their data on remote computers accessed via networks without having to purchase and maintain their own storage
infrastructure and storage support staff. To be successful, SSPs must offer very high availability and reliability and also must keep up with the latest technology. SSPs are responsible for monitoring the stored data and for managing their own capacity, response time, and
reliability.
Application Service Providers (ASPs)
application service provider (ASP)
Company providing software that can
be rented by other companies over the
Web or a private network.
Section 6.2 described hardware capabilities for providing data and software programs to
desktop computers and over networks. It is clear that software will be increasingly delivered
and used over networks. Online application service providers (ASPs) are springing up to
provide these software services over the Web and over private networks. An application
service provider (ASP) is a business that delivers and manages applications and computer
services from remote computer centers to multiple users via the Internet or a private network. Instead of buying and installing software programs, subscribing companies can rent
the same functions from these services. Users pay for the use of this software either on a subscription or per transaction basis. The ASP’s solution combines package software applications and all of the related hardware, system software, network, and other infrastructure services that the customer otherwise would have to purchase, integrate, and manage
independently. The ASP customer interacts with a single entity instead of an array of technologies and service vendors.

FIGURE 6-15 Model of an Application Service Provider (ASP). ASPs provide economies of scale by remotely running software applications for many subscribing companies. Applications are transmitted over the Internet or a private network to user client computers.
The “time-sharing” services of the 1970s, which ran applications such as payroll on their
computers for other companies, were an earlier version of this application hosting. But
today’s ASPs run a wider array of applications than these earlier services and deliver many of
these software services over the Web. With Web-based services, servers perform the bulk of the processing, and the only essential tool users need is a desktop computer running
either thin client software or a Web browser. Figure 6-15 illustrates one model of an ASP.
The ASP hosts applications at its own site, often on servers in a server farm. Servers are not
dedicated to specific customers but are assigned applications based on available capacity.
The application is then transmitted to the customer via the Internet or a private Wide-Area
Network (WAN—see Chapter 8).
Large and medium-size businesses are using ASPs for enterprise systems, sales force
automation, or financial management, and small businesses are using them for functions
such as invoicing, tax calculations, electronic calendars, and accounting. ASP vendors are
starting to provide tools to integrate the applications they manage with clients’ internal systems or with applications hosted by different vendors (McDougall, 2003).
Companies are turning to this “software service” model as an alternative to developing
their own software. Some companies will find it much easier to “rent” software from
another firm and avoid the expense and difficulty of installing, operating, and maintaining
the hardware and software for complex systems, such as enterprise resource planning (ERP)
systems (Walsh, 2003). The ASP contracts guarantee a level of service and support to ensure
that the software is available and working at all times. Today’s Internet-driven business environment is changing so rapidly that getting a system up and running in three months
instead of six could mean the difference between success and failure. Application service
providers also enable small and medium-size companies to use applications that they otherwise could not afford.
Companies considering the software service model need to carefully assess application
service provider costs and benefits, weighing all management, organizational, and technology issues, including the ASP’s ability to integrate with existing systems and deliver the level
of service and performance it has promised (Susarla, Barua, and Whinston, 2003). In some
cases, the cost of renting software can add up to more than purchasing and maintaining the
application in-house. Yet there may be benefits to paying more for software through an ASP
if this decision allows the company to focus on core business issues instead of technology
challenges. More detail on application service providers can be found in Chapter 13.
TABLE 6-9 Examples of Technology Service Providers

Storage service provider
Description: Provides online access over networks to storage devices and storage area network technology.
Example: IBM Managed Storage Services (MSS)

Application service provider
Description: Uses centrally managed facilities to host and manage access to package applications delivered over networks on a subscription basis.
Example: Corio Inc. offers a suite of hosted enterprise application software over a network for a fixed monthly fee.

Management service provider
Description: Manages combinations of applications, networks, systems, storage, and security, as well as providing Web site and systems performance monitoring to subscribers over the Internet.
Example: Totality, SevenSpace/Nuclio

Business continuity service provider
Description: Defines and documents procedures for planning and recovering from system malfunctions that threaten vital business operations.
Example: Comdisco disaster recovery, rapid recovery, and continuous Web availability services
Other Types of Service Providers
Other types of specialized service providers provide additional resources for helping organizations manage their technology assets. Management service providers can be enlisted to
manage combinations of applications, networks, storage, and security, as well as to provide
Web site and systems performance monitoring. Business continuity service providers offer disaster recovery and continuous Web availability services to help firms continue essential operations when their systems malfunction (see Chapter 15). Table 6-9 provides examples of the
major types of technology service providers.
| MAKE IT YOUR BUSINESS
Finance and Accounting
One of the earliest tasks assigned to computers was automating calculations for
finance and accounting, and these functions have remained high-priority targets for computerization.
Many application software packages for individuals as well as for
large businesses support financial processes, such as corporate
accounting, tax calculations, payroll processing, or investment
planning. Calculating the total cost of ownership (TCO) of technology assets usually requires models and expertise supplied by
finance and accounting. You can find examples of finance and
accounting applications on pages 193–194, 210, and 232–233.
Human Resources
Hardware and software technologies are changing very rapidly, providing many new productivity tools to employees and
powerful software packages and services for the human
resources department. Employees will need frequent retraining
in order to use these tools effectively. You can find examples
of human resources applications on pages 232-233.
Manufacturing and Production
Many manufacturing applications are based on client/server
networks, which use networked computers to control the
flow of work on the factory floor. Handheld computers
and bar code scanners are widely used to track items in
inventory and to track package shipments. XML provides a
set of standards through which all systems of participants
in an industry supply chain can exchange data with each
other without high expenditures for specialized translation
programs and middleware. You can find examples of manufacturing and production applications on pages 193-194
and 222.
Sales and Marketing
Sales and marketing has benefited from hardware and software technologies that provide customers and sales staff
with rapid access to information, responses to customer
questions, and order taking. Web browser software provides
an easy-to-use interface for accessing product information
or placing orders over the Web, whereas e-mail software is a
quick and inexpensive tool for answering customer queries.
Web sites can be enhanced with Java applets that allow
users to perform calculations or view interactive product
demonstrations using standard Web browser software. You
can find examples of sales and marketing applications on
page 222.
Utility Computing
Many of the service providers we have just described lease information technology services
using long-term fixed price contracts. IBM is championing a utility computing model in
which companies pay technology service providers only for the amount of computing power
and services they use, much as they would pay for electricity. In this “pay-as-you-go” model of
computing, which is sometimes called on-demand computing or usage-based pricing, customers would pay more or less for server capacity and storage, depending on how much of
these resources they actually used during a specified time period. The pay-as-you-go model
could help firms reduce expenditures for excess computing capacity that may only be needed
during peak business periods as well as expenditures for full-time information systems support personnel, floor space, and backup computers. IBM offers a full range of usage-based
services, including server capacity, storage space, software applications, and Web hosting.
Other vendors, including Hewlett-Packard, Sun Microsystems, and Electronic Data Systems
(EDS), also offer some utility computing services.
utility computing
Model of computing in which companies pay only for the information
technology resources they actually use
during a specified time period. Also
called on-demand computing or
usage-based pricing.
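A minimal sketch of the billing arithmetic behind such a pay-as-you-go contract appears below; the metering units, rates, and usage figures are assumptions for illustration and do not reflect IBM's or any other vendor's actual pricing.

```python
# Hypothetical pay-as-you-go bill for one month (all rates and usage invented).
cpu_hours_used = 2_400            # metered server capacity consumed this month
storage_gb_months = 900           # average storage held during the month
rate_per_cpu_hour = 0.35          # dollars per CPU-hour (assumed)
rate_per_gb_month = 0.50          # dollars per GB-month (assumed)

monthly_bill = (cpu_hours_used * rate_per_cpu_hour
                + storage_gb_months * rate_per_gb_month)
print(f"Usage-based charge for the month: ${monthly_bill:,.2f}")
# Compare with a fixed-price lease sized for the annual peak, say $2,000 per month;
# in off-peak months the metered charge can be considerably lower.
```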
Summary
1. What computer processing and storage capability does our
organization need to handle its information and business
transactions?
Managers should understand the alternative computer hardware technologies available for processing
and storing information so that they can select the
right technologies for their businesses. Modern computer systems have six major components: a central
processing unit (CPU), primary storage, input devices,
output devices, secondary storage, and communications devices. All of these components need to work
together to process information for the organization.
The CPU is the part of the computer where the
manipulation of symbols, numbers, and letters occurs.
The CPU has two components: an arithmetic-logic
unit and a control unit.
The CPU is closely tied to primary memory, or primary storage, which stores data and program instructions
temporarily before and after processing. Several different
kinds of semiconductor memory chips are used with primary storage: RAM (random access memory) is used for
short-term storage of data and program instructions, and
ROM (read-only memory) permanently stores important
program instructions.
Computer processing power depends in part on the speed of the microprocessor, which integrates the computer's logic and control on a single chip. Most conventional computers process one instruction at a time, but
computers with parallel processing can process multiple
instructions simultaneously. The principal secondary
storage technologies are magnetic disk, optical disk, and
magnetic tape. Optical disks can store vast amounts of
data compactly. CD-ROM disk systems can only be read
from, but CD-RW (rewritable) optical disk systems are
now available.
The principal input devices are keyboards, computer
mice, touch screens, magnetic ink and optical character
recognition devices, pen-based instruments, digital
scanners, sensors, audio input devices, and radio-frequency identification devices. The principal output
devices are cathode ray tube terminals, printers, and
audio output devices. In batch processing, transactions
are accumulated and stored in a group until the time
when it is efficient or necessary to process them. In
online processing, the user enters transactions into a
device that is directly connected to the computer system. The transactions are usually processed immediately. Multimedia integrates two or more types of media,
such as text, graphics, sound, voice, full-motion video,
still video, and/or animation into a computer-based
application.
2. What arrangement of computers and computer processing
would best benefit our organization?
Managers should understand the capabilities of various categories of computers and arrangements of computer processing. The type of computer and arrangement of processing power that should be used by the
business depends on the nature of the organization and
its problems.
Computers are categorized as mainframes, midrange
computers, PCs, workstations, or supercomputers.
Mainframes are the largest computers; midrange computers can be minicomputers used in factory, university,
or research lab systems, or servers providing software and
other resources to computers on a network. PCs are
desktop or laptop machines; workstations are desktop
machines with powerful mathematical and graphic
capabilities; and supercomputers are sophisticated, powerful computers that can perform massive and complex
computations rapidly. Because of continuing advances
in microprocessor technology, the distinctions between
these types of computers are constantly changing.
Computers can be networked together to distribute
processing among different machines. In the client/server
model of computing, computer processing is split
between “clients” and “servers” connected via a network.
The exact division of tasks between client and server
depends on the application. Network computers are
pared-down desktop machines with minimal or no local
storage and processing capacity. They obtain most or all
of their software and data from a central network server.
Whereas network computers help organizations maintain
central control over computing, peer-to-peer computing
puts processing power back on users’ desktops, linking
individual PCs, workstations, or other computers through
the Internet or private networks to share data, disk space,
and processing power for a variety of tasks. Grid computing is a form of peer-to-peer computing that breaks down
problems into small pieces that can run on many separate
machines organized into a computational grid.
3. What kind of software and software tools do we need to
run our business? What criteria should we use to select our
software technology?
Managers should understand the capabilities of various
types of software so they can select software technologies
that provide the greatest benefit for their firms. There are
two major types of software: system software and application software. System software coordinates the various
parts of the computer system and mediates between application software and computer hardware. Application software is used by application programmers and some end
users to develop specific business applications.
The system software that manages and controls the
activities of the computer is called the operating system.
The operating system acts as the chief manager of the
information system, allocating, assigning, and scheduling
system resources, and monitoring the use of the computer. Multiprogramming, multiprocessing, virtual storage, and time-sharing are operating system capabilities
that enable computer system resources to be used more
efficiently. Other system software includes computer-language translation programs that convert programming
languages into machine language and utility programs
that perform common processing tasks.
PC operating systems have developed sophisticated
capabilities such as multitasking and support for multiple users on networks. Leading PC operating systems
include Windows XP, Windows 98 and Windows Me,
Windows Server 2003 and Windows 2000, Windows
CE, UNIX, Linux, the Macintosh operating system, and
DOS. PC operating systems and many kinds of application software now use graphical user interfaces.
The general trend in software is toward user-friendly,
high-level languages that both increase professional programmer productivity and make it possible for end users
to work directly with information systems. The principal
programming languages used in business include
COBOL, C, C++, and Visual Basic, and each is designed to solve specific types of problems. Fourth-generation languages are less procedural than conventional programming languages and enable end users to
perform many software tasks that previously required
technical specialists. They include popular PC software
tools, such as word processing, spreadsheet, data management, presentation graphics, and e-mail software,
along with Web browsers and groupware. Enterprise software, middleware, and enterprise application integration
software are all software tools for promoting enterprisewide integration of business processes and information
system applications.
Software selection should be based on criteria such as
efficiency, compatibility with the organization’s technology platform, vendor support, and whether the software
tool is appropriate for the problems and tasks of the
organization.
4. What new software technologies are available? How
would they benefit our organization?
Object-oriented programming tools and new programming languages such as Java, hypertext markup language
(HTML), and eXtensible Markup Language (XML), can
help firms create software rapidly and efficiently and produce applications based on the Internet or data in Web
sites. Object-oriented programming combines data and
procedures into one object, which can act as an independent software building block. Each object can be used in
many different systems without changing program code.
Java is an object-oriented programming language. It
can deliver precisely the software functionality needed for
a particular task as a small applet that is downloaded from
a network. Java can run on any computer and operating
system. HTML is a page description language for creating
Web pages. XML is a language for creating structured documents in which data are tagged for meanings. The
tagged data in XML documents and Web pages can be
manipulated and used by other computer systems. XML
can thus be used to exchange data between Web sites and
different legacy systems within a firm and between the systems of different partners in a supply chain.
Enterprise application integration (EAI) software and
Web services can be used to integrate disparate applications. EAI software enables multiple systems to exchange
data through a single hub to support new business
processes without extensive custom programming, but it
works with specific applications and operating systems.
Web services use open standards to link disparate systems
belonging to a single organization or to multiple organizations and can be used with any type of application and
operating system software.
5. How should we acquire and manage the firm’s hardware
and software assets?
Computer hardware and software technology can
either enhance or impede organizational performance.
Both hardware and software are major organizational
assets that must be carefully managed. Electronic commerce and electronic business have put new strategic
emphasis on technologies that can store vast quantities of
transaction data and make them immediately available
online. Managers and information systems specialists
need to pay special attention to hardware capacity planning and scalability to ensure that the firm has enough
computing power for its current and future needs.
They also need to balance the costs and benefits of
owning and maintaining their own hardware and software versus renting these assets from external service
providers. Online storage service providers (SSPs) rent
out storage space to subscribers over the Web, selling
computer storage as a pay-per-use utility. Application
service providers (ASPs) rent out software applications
and computer services from remote computer centers to
subscribers over the Internet or private networks. In a
utility computing model, companies pay technology
service providers only for the amount of computing
power and services that they actually use.
Calculating the total cost of ownership (TCO) of the
organization’s technology assets can help provide managers with the information they need to manage these
assets and decide whether to rent or own these assets.
The total cost of owning technology resources includes
not only the original cost of computer hardware and software but also costs for hardware and software upgrades,
maintenance, technical support, and training.
Key Terms
Application server, 221
Application service provider (ASP), 224
Application software, 225
Arithmetic-logic unit (ALU), 195
Batch processing, 201
Bit, 195
Byte, 195
C, 209
C++, 209
Capacity planning, 222
CD-ROM (compact disk read-only
memory), 199
CD-RW (CD-ReWritable), 199
Central processing unit (CPU), 195
Centralized processing, 203
Client, 203
Client/server computing, 203
COBOL (COmmon Business Oriented
Language), 209
Compiler, 206
Computer, 194
Control unit, 196
Data management software, 216
Desktop publishing software, 215
Digital video disk (DVD), 199
Distributed processing, 203
Downsizing, 204
Electronic mail (e-mail), 218
Enterprise application integration (EAI)
software, 219
Floppy disk, 198
Fourth-generation language, 210
Graphical user interface (GUI), 207
Grid computing, 205
Groupware, 218
Hard disk, 198
Hypertext markup language (HTML), 213
Integrated software package, 217
Java, 213
Legacy system, 219
Linux, 208
Machine cycle, 196
Machine language, 209
Magnetic disk, 198
Magnetic tape, 199
Mainframe, 202
Massively parallel computers, 197
Megahertz, 197
Microprocessor, 197
Middleware, 219
Midrange computer, 202
Minicomputer, 203
MP3 (MPEG3), 202
Multimedia, 202
Natural language, 211
Network-attached storage (NAS), 199
Network computer (NC), 204
Object-oriented programming, 211
Office 2000, Office XP, and Office 2003, 217
Online processing, 201
Open-source software, 208
Operating system, 206
Parallel processing, 197
Peer-to-peer computing, 205
Personal computer (PC), 203
Presentation graphics, 216
Primary storage, 195
Program, 205
Query language, 211
Radio-frequency identification (RFID), 200
RAID (Redundant Array of Inexpensive
Disks), 198
RAM (random access memory), 196
Reduced instruction set computing
(RISC), 197
ROM (read-only memory), 197
Scalability, 223
Secondary storage, 198
Server, 203
Server farm, 203
Software package, 214
Source code, 206
Spreadsheet, 215
Storage area network (SAN), 199
Storage service provider (SSP), 224
Streaming technology, 202
Supercomputer, 203
System software, 205
Total cost of ownership (TCO), 223
UNIX, 208
Utility computing, 227
Visual Basic, 209
Visual programming, 209
Web browser, 218
Web server, 221
Web services, 228
Windows 98, 207
Windows 2000, 207
Windows Server 2003, 207
Windows XP, 207
Word processing software, 215
Workstation, 203
XHTML (Extensible Hypertext Markup
Language), 214
XML (eXtensible Markup Language), 214
Review Questions
1. What are the components of a contemporary computer
system?
2. Name the major components of the CPU and the function of each.
3. Distinguish between serial, parallel, and massively parallel processing.
4. List the most important secondary storage media. What
are the strengths and limitations of each?
5. List and describe the major computer input and output
devices.
6. What is the difference between batch and online
processing?
7. What is multimedia? What technologies are involved?
8. What is the difference between a mainframe, a minicomputer, a server, and a PC? Between a PC and a
workstation?
9. Compare the client/server, network computer, and peer-to-peer models of computing.
10. What are the major types of software? How do they differ in terms of users and uses?
11. What is the operating system of a computer? What does
it do? What roles do multiprogramming, virtual storage,
time-sharing, and multiprocessing play in the operation
of an information system?
12. List and describe the major PC operating systems.
13. List and describe the major application programming
languages. How do they differ from fourth-generation
languages?
14. What is object-oriented programming? How does it differ from conventional software development?
15. What are Java, HTML, and XML? Compare their
capabilities. Why are they becoming important?
16. Name and describe the most important PC productivity
software tools.
17. Name and describe the kinds of software that can be
used for enterprise integration and e-business.
18. List and describe the principal issues in managing
hardware and software assets.
Discussion Questions
1. Why is selecting computer hardware and software for
the organization an important management decision?
What management, organization, and technology
issues should be considered when selecting computer
hardware?
2. Should organizations use application service providers
(ASPs) and storage service providers (SSPs) for all their
software and storage needs? Why or why not? What
management, organization, and technology factors
should be considered when making this decision?
Application Software Exercise: Spreadsheet Exercise:
Evaluating Computer Hardware and Software Options
You have been asked to obtain pricing information on hardware and software for an office of 30 people. Using the
Internet, get pricing for 30 PC desktop systems (monitors,
computers, and keyboards) manufactured by IBM, Dell,
and Compaq as listed at their respective corporate Web
sites. (For the purposes of this exercise, ignore the fact that
desktop systems usually come with preloaded software packages.) Also obtain pricing on 15 monochrome desktop
printers manufactured by Hewlett-Packard and by Xerox.
Each desktop system must satisfy the minimum specifications shown in the following table:
Minimum Desktop Specifications
Processor speed: 2 GHz
Hard drive: 40 GB
RAM: 256 MB
CD-ROM speed: 48x
Monitor (diagonal measurement): 17 inches
Each desktop printer must satisfy the minimum specifications shown in the following table:
Minimum Monochrome Printer Specifications
Print speed: 12 pages per minute
Print quality: 600 x 600 dpi
Network ready?: Yes
Maximum price/unit: $1,000
After getting pricing on the desktop systems and printers,
obtain pricing on 30 copies of Microsoft’s Office XP or
Office 2003, the most recent versions of Corel’s WordPerfect
Office and IBM’s Lotus SmartSuite application packages,
and on 30 copies of Microsoft Windows XP Professional edition. The application software suite packages come in various versions, so be sure that each package contains programs
for word processing, spreadsheet analysis, database analysis,
graphics preparation, and e-mail.
Prepare a spreadsheet showing your research results for
the desktop systems, for the printers, and for the software. Use
your spreadsheet software to determine the desktop system,
printer, and software combination that will offer both the best
performance and pricing per worker. Since every two workers
will share one printer (15 printers/30 systems), assume only
half a printer cost per worker in the spreadsheet. Assume that
your company will take the standard warranty and service
contract offered by each product’s manufacturer.
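If it helps to check the logic of your spreadsheet, the per-worker figure combines one desktop system, one copy of each software item, and half of a printer (since two workers share each printer). The prices in the sketch below are placeholders, not current vendor quotes; substitute the figures you collect.

```python
# Placeholder prices; replace with the figures gathered from the vendor Web sites.
desktop_price = 899.00        # one desktop system (monitor, computer, keyboard)
printer_price = 649.00        # one monochrome network printer
office_suite_price = 329.00   # one copy of the productivity suite
os_price = 199.00             # one copy of the operating system

workers = 30
printers = 15                 # two workers share each printer

cost_per_worker = (desktop_price + office_suite_price + os_price
                   + printer_price / 2)
total_cost = (workers * (desktop_price + office_suite_price + os_price)
              + printers * printer_price)

print(f"Cost per worker: ${cost_per_worker:,.2f}")
print(f"Total for the office: ${total_cost:,.2f}")
```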
Dirt Bikes U.S.A.:
Analyzing the Total Cost of Ownership (TCO) of Desktop Software Assets
Software requirements: Spreadsheet software
Web browser software
Electronic presentation software (optional)
Dirt Bikes would like to replace the desktop office productivity software used by its corporate administrative staff,
consisting of its controller, accountant, administrative
assistant, two human resources specialists, and three secretaries—a total of eight users. These employees need a suite
that has word processing, spreadsheet, database, electronic
presentation, and e-mail software tools. Occasionally, they
would like to use these software tools to publish Web pages
or to access data from the Internet. Use the Web to
research and compare the pricing and capabilities of
either Microsoft Office 2003 or Office XP versus Sun
StarOffice.
1. Use your spreadsheet software to create a matrix comparing the prices of each software suite as well as their
functionality. Identify the lowest-price system that
meets Dirt Bikes’ requirements.
2. You have learned that hardware and software purchase
costs only represent part of the total cost of ownership
(TCO) of technology assets and that there are additional cost components to consider. For this particular
software system, assume that one-time installation
costs $25 per user, one-time training will cost $100 per
user, annual technical support will cost 30 percent of
initial purchase costs, and annual downtime another
15 percent of purchase costs. What is the total cost of ownership of Dirt Bikes' new desktop productivity systems over a three-year period? (A worked calculation sketch follows this exercise.)
3. (Optional) If possible, use electronic presentation software to summarize your findings for management.
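For question 2, the three-year TCO can be assembled as in the sketch below. The per-user purchase price is a placeholder to be replaced with the suite price you find in your own research; the other percentages and fees come from the exercise.

```python
# Dirt Bikes three-year TCO sketch (purchase price per user is a placeholder).
users = 8
purchase_price_per_user = 300.00        # replace with your researched suite price
purchase = users * purchase_price_per_user

installation = users * 25               # one-time, $25 per user
training = users * 100                  # one-time, $100 per user
support_per_year = 0.30 * purchase      # 30 percent of initial purchase cost annually
downtime_per_year = 0.15 * purchase     # 15 percent of purchase cost annually

years = 3
tco = purchase + installation + training + years * (support_per_year
                                                    + downtime_per_year)
print(f"Three-year TCO for {users} users: ${tco:,.2f}")
```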
Electronic Business Project:
Planning and Budgeting for a Sales Conference
The Foremost Composite Materials Company is planning a
two-day sales conference for October 15–16, starting with a
reception on the evening of October 14. The conference consists of all-day meetings that the entire sales force, numbering
125 sales representatives and their 16 managers, must attend.
Each sales representative requires his or her own room and
the company needs two common meeting rooms, one large
enough to hold the entire sales force plus a few visitors (200)
and the other able to hold half the force. Management has set
a budget of $75,000 for the representatives’ room rentals. The
hotel must also have such services as overhead and computer
projectors as well as business center and banquet facilities. It
also should have facilities for the company reps to be able to
do work in their rooms and to enjoy themselves in a swimming pool or gym facility. The company would like to hold
the conference in either Miami or New Orleans.
Foremost usually likes to hold such meetings in Hilton- or Marriott-owned hotels. Use their sites, Hilton.com
(www.hilton.com) and Marriott.com (www.marriott.com) to
select a hotel in whichever of these cities that would enable
the company to hold its sales conference within its budget.
Other features you should use to help select a hotel include
convenience to the airport and to areas of interest to tourists
(such as New Orleans’ French Quarter or Miami’s South
Beach) in case some employees would like to take an extra
day or two to vacation. Link to the two sites’ home pages,
and search them to find a hotel that meets Foremost’s sales
conference requirements. Once you have selected the hotel,
locate flights arriving the afternoon prior to the conference
because the attendees will need to check in the day before
and attend your reception the evening prior to the conference. Your attendees will be coming from Los Angeles (54),
San Francisco (32), Seattle (22), Chicago (19), and
Pittsburgh (14). Determine costs of each airline ticket from
these cities. When you are finished, draw up a budget for the
conference. The budget will include the cost of each airline
ticket, the room cost, and $40 per attendee per day for food.
What was your final budget? Which did you select as the
best hotel for the sales conference and why? How did you
find the flights?
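One way to organize the budget arithmetic is sketched below. The airfares, room rate, and number of nights are placeholders (assumptions) to be replaced with the figures you find; the attendance numbers and the $40-per-day food allowance come from the project description.

```python
# Conference budget sketch (airfares, room rate, and nights are placeholders).
attendees_by_city = {"Los Angeles": 54, "San Francisco": 32, "Seattle": 22,
                     "Chicago": 19, "Pittsburgh": 14}
airfare_by_city = {"Los Angeles": 320, "San Francisco": 340, "Seattle": 310,
                   "Chicago": 250, "Pittsburgh": 230}   # replace with real quotes

room_rate_per_night = 150      # placeholder nightly rate from the hotel site
nights = 3                     # assumes check-in Oct 14, check-out Oct 17
food_per_attendee_per_day = 40
days = 3                       # reception evening plus two conference days

total_attendees = sum(attendees_by_city.values())        # 141 people
airfare_total = sum(attendees_by_city[c] * airfare_by_city[c]
                    for c in attendees_by_city)
rooms_total = total_attendees * room_rate_per_night * nights
food_total = total_attendees * food_per_attendee_per_day * days

budget = airfare_total + rooms_total + food_total
print(f"Attendees: {total_attendees}, estimated budget: ${budget:,.2f}")
```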
Group Project:
Capacity Planning for E-Commerce and E-Business
Your company implemented its own electronic commerce
site using its own hardware and software, and business is
growing rapidly. The company Web site has not experienced
any outages, and customers are always able to have requests
for information or purchase transactions processed very rapidly. Your information systems department has instituted a
formal operations review program that continuously monitors key indicators of system usage that affect processing
capacity and response time. The following report for management illustrates two of those indicators: daily CPU usage and daily I/O usage for the system (hours are for U.S. Eastern Standard Time). I/O usage measures the number of times a disk has been read.
Your server supports primarily U.S. customers who
access the Web site during the day and early evening. I/O
usage should be kept below 70 percent if the CPU is very
busy so that the CPU does not waste machine cycles looking for data. I/O usage is high between 1 A.M. and 6 A.M.
because the firm backs up its data stored on disk when the
CPU is not busy.
CPU and I/O Usage: the chart plots daily CPU and I/O usage as percent utilized (0 to 100) by hour of day, from hour 1 through hour 24 (hours are for U.S. Eastern Standard Time).
1. Anticipated e-commerce business over the next year
is expected to increase CPU usage and I/O usage by
20 percent between 1 P.M. and 9 P.M. and by 10 percent during the rest of the day. Does your company
have enough processing capacity to handle this
increased load? (A calculation sketch follows this project.)
2. What would happen if your organization did not
attend to capacity issues?
3. If possible, use electronic presentation software to
present your findings.
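For question 1, one approach is to scale each hour's chart readings by the projected growth and flag any hour that violates the thresholds described above. The hourly readings in the sketch are placeholders to be read off the chart, and "CPU very busy" is approximated here as utilization above 80 percent, which is an assumption.

```python
# Sketch for question 1 (hourly readings are placeholders read from the chart).
cpu_usage = {14: 70, 15: 75, 16: 80, 17: 82, 18: 78}   # percent utilized, by hour
io_usage = {14: 55, 15: 60, 16: 62, 17: 65, 18: 60}

def projected(hour, value):
    # 20 percent growth from 1 P.M. (13:00) through 9 P.M. (21:00), 10 percent otherwise.
    growth = 1.20 if 13 <= hour <= 21 else 1.10
    return value * growth

for hour in cpu_usage:
    cpu = projected(hour, cpu_usage[hour])
    io = projected(hour, io_usage[hour])
    # Flag saturation: CPU at or near 100 percent, or I/O above 70 percent while
    # the CPU is busy (assumed here to mean CPU above 80 percent).
    if cpu >= 100 or (io > 70 and cpu > 80):
        print(f"Hour {hour}: projected CPU {cpu:.0f}%, I/O {io:.0f}% -> capacity problem")
    else:
        print(f"Hour {hour}: projected CPU {cpu:.0f}%, I/O {io:.0f}% -> OK")
```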
Case Study:
Zurich North America Hunts Down Its IT Assets
Asset management
of information technology is an often-ignored specialty,
but Toronto-based
Zurich North America
Canada has turned to
it as one more area
where the company is attempting to
reduce its costs. Zurich North America is a
leading commercial property-casualty
insurance provider serving the multinational, middle market, and small business
sectors in Canada and the United States.
The company is the Canadian arm of
Zurich North America, which in turn is a
subsidiary of the Zurich Financial Services
Group headquartered in Zurich,
Switzerland. The Zurich group, which was
established in 1872, is an insurance-based financial services provider with
more than 70,000 employees in more
than 60 countries around the world. It
mostly handles insurance for corporations. The company has a very high credit
rating—A.M. Best rates its bonds as A
(excellent), and Standard & Poor’s rates
them as A (strong). Nonetheless, it
encountered financial problems arising
from what its CEO James J. Schiro calls
“weak and fragile equity markets and
record low interest rates.” As a result, the
company reported major financial losses
in 2001 and 2002. The Canadian arm
shared in these problems, which is one
reason that ING Canada acquired Zurich
Canada’s personal property and casualty
insurance operations. But Zurich Canada
still needs to find ways to cut its costs.
When John Fort became Zurich North America Canada's CIO in 2002, his first
task was to determine where the money
was going. Like so many other companies
in the 1990s, Zurich Canada’s information
technology procurement was decentralized, allowing different departments and
groups to purchase the same items at different prices. The company’s senior management did not know how its money was
being spent on technology. Zurich’s information systems group had not been keeping good records on its purchases so
management could not tell what hardware
and software had cost and for what the
technology was currently being used.
When Fort became CIO, Nikki Cule, who
was supervisor of technology asset management, observed, “We were doing a lot
of the right things but also a lot of the
wrong things.”
For example, two different units were
purchasing software, so that while one
unit purchased full office software packages for $900 each, the other unit was
purchasing licenses to use the software at
a price of only $200 each. In another
example, the company was licensing four
separate COBOL compilers at $30,000
annually for each copy, but COBOL use was
declining. In this case, an investigation
revealed that the company did still need
three compilers but not four, and that one
investigation saved the company $30,000
each year. The company needed not only
to centralize this information so that the
extra expenditures could be eliminated
but also someone to find out what each
piece of software and hardware was used
for and why it was necessary. Fort said,
“Not only have we found that we don’t
need a particular piece of software any
more,” but, he continued, “we’ve also
found that we were running jobs with it
[even though] no one was using the
results.” And so, he concluded, “Not only
are you paying for an asset, but in some
cases that asset is doing something that’s
of no use to the organization.”
Centralizing not only the information on
what was being purchased but also what
it was used for would likely enable the
company’s asset management group to
save thousands of dollars annually.
To make matters even worse, a lot of
the hardware and software was not purchased but instead was being leased. In
1999, the then-CIO, Kerry Long, realized
that a number of hardware and software
assets were coming off lease, and yet the
department did not know why the assets
had been leased or what changes had
been made that may have made a renewal
of a lease unnecessary. IT did not even
know what department or even which city
each asset was used in or who was using it
or why. The leasing companies had systems to monitor what they had leased out,
and they were very willing to give this
software to the companies leasing their
hardware and software (in this case Zurich
Canada). However, Cule said, “These tools
were great at keeping track of what was
important to the leasing company, but not
necessarily what was important to us.” Her
point was this: “The kinds of information
we needed weren’t there.” Her conclusion:
“We realized the substantial cost if we didn’t get our act together, so we quickly had
to figure out what was off lease and how
to deal with it.”
The leasing companies had built their
own tracking software using Lotus Notes
groupware, and Zurich IT decided that its
company should use the same software
to build its own tracking system, which
it called Zurich Asset Manager or ZAM.
The reason Cule and her staff decided to
use Lotus Notes was that it “was something that everybody in Zurich had
access to.” She saw it as a valuable tool,
giving the example that “the tech support people in our Vancouver office could
go into ZAM and see what was in their
office, and if they switched equipment
they could update the database.” When
they were building ZAM, Cule’s group
included processes to move, add, and
change records on their software and
hardware. The result was that, with ZAM,
when something was going off lease, the
staff seldom had to track the item down to find out who had it and what, if anything, it was being used for. If the IT
technicians kept ZAM current, the information would be easily available. Later
on, Cule estimated that the result has
been that the system had become 80 to
90 percent accurate. ZAM was subsequently assigned to track fax machines,
cell phones, and pagers.
The issue that Cule and her staff did
have to face at first was to get ZAM to
work properly, which meant to get the IT
staff everywhere to use it, to keep it
up-to-date, and to make changes when
necessary. They had been successful in
explaining its importance to the whole IT
staff, so, according to Fort, “Our users
are saying ‘If I’m not playing the game
I’m going to end up paying more than I
need to pay.’ So they now understand the
importance of this, and they’re all becoming amateur asset managers.” He further
observed, “They’ve become extremely
interested in helping us do a great job of
tracking assets and making sure that
records are accurate. We’re not all the
way there yet, but at least they’re on
board.” Cule sees herself as also having
another role in ensuring people use ZAM.
Her view is that, “People don’t like the
fact that they can’t have two computers,
but it costs a lot to have a laptop in
addition to a desktop.” So her job, as she
sees it, is to play cop.
John Fort summed the whole issue up
this way: “Just tracking your assets isn’t
good enough. You have to know why you
have that asset—what value it delivers
to the business. If you can’t answer that
question,” he continued, “then you’d better find a way to answer it. You have to
birddog your assets to the point of ‘why.’”
He concluded, “If we spend a dollar on an
unnecessary compiler, that’s a dollar we
can’t spend on something the business
can really use. So it’s the value quotient
of the asset that we’re adding to the
task.” Why now? It was triggered by the
enormous amounts being spent on technology without understanding its value.
Fort and Cule concluded that they had
learned some valuable lessons. One was
that they needed to think ahead: Don’t
consider only the beginning, the purchase or lease. Think about what can
happen in two or three years. A second
lesson they learned was to put good
processes in place. Without the
processes, such as ZAM, things become
hidden, and a lot of work will be needed
to unearth that information, and probably that will never happen. They also
concluded that they want to learn from
others, and so, for example, they attend
all the meetings of the Canadian Software
Asset Management Users Group. Another
lesson Fort particularly stressed is to persevere because the effort will pay for
itself many times over. Cule emphasized
the need to shamelessly promote your
effort so that management and staff
know about it and understand its value.
You need to show that you are actually
saving the company real dollars by your
effort. Finally, Fort focused on the need
for everyone to know the value of their
assets: know each asset's shelf life, when to replace it, when to move it on, and when to eliminate it.
Sources: David Carey, “The Case of the Elusive
Assets,” http://www.itworldcanada.com/,
February 2003; Canadian Software Asset
Management Users Group, “Company Profile,”
Technology Asset Management Inc., April 2003;
Press Release, “2002 Results and Progress on
2003 Restructuring Program,” Zurich North
America, February 27, 2003; and ING Press
Release, “ING, Zurich Granted Regulatory
Approvals for Strategic Alliance,” http://www.
ingcanada.com, January 25, 2002.
CASE STUDY QUESTIONS
1. Evaluate Zurich North America Canada
using the value chain and competitive
forces models. Why did IT asset management become so important to this
company?
2. Why did Zurich North America have
problems managing its hardware and
software assets? How serious were
these problems? What management,
organization, and technology factors
were responsible for those problems?
3. How did Zurich North America solve its
asset management problem? What managerial and technology tools did it use?